Releases: tenstorrent/tt-metal

v0.46.0

05 Apr 13:57

📦 Uncategorized

  • user-triggerable C++ post-commit suite
  • #6406: add missing position_ids/attention_mask to bert demo
  • #6282: Add AdamW
  • #6315: Fix dprint tests for T3000
  • FD2: prefetch stall, dispatch wait, linear read, delay and cleanup
  • #6609: update wording in demo section of main README.md
  • #6364: Autocomplete for pybinded types
  • Asarje/ttnn rn50 b20
  • FD2.0 Test - Fix l1 buffer not page-size aligned after FD-on-eth changes to L1_UNRESERVED_BASE
  • #6593: Add resharding to Llama2 model when possible.
  • #6572: Fix ttnn.repeat_interleave example in documentation
  • #5780: Re-enable 100K enqueue program stress test on grayskull
  • Enable basic width sharding support in all-gather
  • Alex/metal/remove cb wait markers
  • #6657: Use sysmem manager cq size instead of recomputing it each time…
  • #0: (MINOR) Add Grayskull purchase link and update version to 0.46.0
  • #5063: add TopK API to metal
  • #5480: FD2.0 Test - Fix test_prefetcher for dram paged read test (-t 3) on whb0
  • Fix logit low pcc
  • Backward op - Fixed ldexp, hardsigmoid and asin
  • #6598: Fix softplus
  • Add support for BFP4_B tensor serialization
  • Eltwise mul for different batch size
  • #6575: Split docs into separate Metalium and nn docs
  • #0: Add two separate links for documentation (tt-metalium/ttnn) on README
  • #6361: Update ttnn repeat to use correct shapes when formatting output
  • #0: Sayonaraaaaaaa
  • FD2.0 Test fix test_prefetcher add_paged_dram_data_to_worker_data dropping start_page
  • #5785: Watcher ringbuffer implementation
  • Add FD 2.0 WriteHost Command
  • #0: Put back frequent api tests because I'm an idiot
  • Optimize All Gather Interleaved Worker send/receive
  • #0: changing all #include common/* to #include tt_metal/common/*
  • #6676: Fix issues related to unary lte and gte
  • #5817: Fix lerp
  • #6589: Fix for relu_bw
  • #6633: Backward test update
  • #0: Skip logit, logiteps test
  • #0: Testing CI fix
  • #5480: Update test_prefetcher to pass added hugepage args to dispatch kernel
  • Fix l1 acc, add whb0 optimized conv tests
  • Alignment fix for eth core kernels
  • Add data parallel (multi-chip) for Falcon7b (prefill/decode) model and corresponding tests
  • CQ_DISPATCH_CMD_WRITE_PAGED support in test_dispatcher and passing tests
  • #6647: disable failing ci cpp tests and reenable cpp pipeline on CI
  • Backward test updates
  • Ngrujic/check bugs
  • Add Llama matmul perf tests to main
  • TTLIB: removing working tests from broken
  • #6443: Update backward asin and addcdiv logic
  • #0: Fix output cb size calculation in reshard op for bfp8b
  • #0: use smart ptrs in allocator
  • Jvasilje docs 0322
  • DRAM based device profiler with Tracy support
  • #6553: Fix ttnn.reshape(..) handling for bfloat16, TILE_LAYOUT
  • PR: #6746
  • Add Llama2 demo to tt-metal docs
  • Mistral-7B WH demo
  • Revert "#0: Put back frequent api tests because I'm an idiot"
  • FP32 support
  • #0: Add back frequent api tests to run.sh
  • Bteng/watcher ci3
  • Remove cpuprof
  • logo update
  • #6184: sharded row major silu support.
  • #6443: Update div_bw and backward ops test file
  • #6705: Relax forcing of keyword argument in ttnn.open_device
  • Forward op tests
  • #6691: Allow blocking of inner dim within a core for sharded in0 for 2d and 1d systolic matmuls
  • #6662: Width Sharding support for eltwise OP
  • Stable diffusion python API level perf improvements
  • Add get_compute_kernel_config_args function
  • #0: Add fd-2/main triggers for pull_request and push for post-commit
  • #5480: FD2 refactor for pre/dis patch variants
  • #6654: Add perf tests for ttnn ResNet50
  • #5480: Fix fd gtest unit test test_write_host
  • #0: Set myself as setup.py owner
  • #6780: Add mistral7b to demos list in getting started
  • #4003: re-added TTNN_ENABLE_LOGGING as runtime flag
  • #0: Fix semaphore address gen bug
  • #6769: Disable program caching for failing Llama tests.
  • #5480: Fix zero sized write transaction request that could occur in write_linear_host
  • #6077: Fix unet pcc issues
  • Remove DstSync from llk api templates
  • FP32 Support
  • #6680: Reverting move op change
  • #6443: Update asinh and softsign backward
  • Backward tests with updated test modules
  • Ngrujic/check bugs 1
  • #6654: Moving init for self.compute_kernel_config
  • #6805: reproduce the bug with sharded split_query_key_value_and_split_heads
  • #6832: Account for tile-padding in softmax for mistral 7B
  • Enable support for uint32 format to be consumed by SFPU (issue #4624)
  • #4252: fix clang build error since std::log2 only constexpr in gcc
  • #4003: log, debug and add pre- and post- hooks only for top-level ttnn ops
  • #6823: Fix core count to not include dispatch cores in op report
  • #6197: Align pages for interleaved <-> sharded.
  • METALIUM_GUIDE
  • Bteng/watcher post commit
  • #6443: update backward test file for relational ops and concat op
  • Revert "Bteng/watcher post commit"
  • #6443: Update backward ops
  • Backward test updates
  • #0: Add the dim 0 support repeat backward
  • Update hard related test ops
  • #6757: Remove set_profiler_location
  • #6443: Update backward ops erfinv elu hypot cos sin
  • #6861: Enable Watcher/dprint tests on T3000 CI
  • Update Mistral perf regression for CI, until issue is resolved
  • Mamba/perf v1
  • #0: remove data movement ops related to silu in SD
  • #4003: added proper fallback for getitem of ttnn.Tensor. Slice the tensor only on the tile boundary but set the shape based on whatever user provided
  • #4003: added proper fallbacks for every op that falls back to torch
  • #6731: add fix to LN width sharding
  • #5797: add back sweep test for ln
  • Integrate GroupNorm V2 to SD model
  • METALIUM_GUIDE.md updates
  • [Falcon7b] Fix bugs with inference throughput measurements in demo
  • #0: shallow unet add perf_mode
  • #6154: 2d matmul in0 height, in1 width sharding
  • #5249: Various Falcon40b test and demo cleanup
  • #0: fix incremental build
  • #0: remove upsample spill to DRAM
  • [Llama2 Prefill] Model Functionality completed
  • Watcher alignment checking for PCIe/DRAM <-> L1
  • #6920: fixed the error in whisper
  • Update METALIUM_GUIDE.md
  • #6644: save l1 buffers to database
  • Update usage.rst
  • #6804: fix ttnn falcon7b demo regression + add to CI regressions
  • #6285: Add backward support for floor round and div_no_nan
  • [skip ci] Update INSTALLING.md
  • #6873: Add more test combinations to tt_lib sweeps add, add_unary, su…
  • Ngrujic/check bugs 3
  • #6882: Updated Mistral-7b perf estimate
  • #6850: Update install links in Sphinx docs to point directly to INSTALLING.md
  • #6619: Fix per op profiler sum
  • #6644: sync before calling print l1 buffers
  • Barsic/ttlib ops check
  • Barsic/ttlib params fix
  • #6962: Move cd tt-metal earlier in the command list of INSTALLING.md
  • #6819: Add support for CreateKernel absolute file paths
  • #6356: Remove half-half grid logic for bmms
  • #4003: added a flag to disable ttnn fallbacks. Don't throw an error w…
  • #0: Correct FW versions, tt-smi versions, and add note about tt-topology
  • #0: Capitalize tt to TT consistently for marketing
  • #0: Add myself as CODEOWNER for INSTALLING.md
  • #6644: ttnn visualizer
  • #6847: Allow disabling individual watcher features
  • #6889: Support printing/padding/tilizing multi-device tensors
  • #4003: removed ttnn.print_l1_buffers and consolidated all ttnn flags into a CONFIG class
  • #6217: tt_lib async mode support (single chip tensors supported)
  • Reshard With Ranges
  • #4003: updated buffer report to show...

v0.45.0

22 Mar 18:03

🚀 Features

  • #6204: added support for num_users < 32 for update cache op.
  • #6247 Llama2 Galaxy MLP implementation

📦 Uncategorized

  • #4736: Add support for moreh_norm op
  • Fix moreh_layernorm rstd
  • #5508: Change test_moreh_layernorm.py for debugging
  • #4686: add infra for sharing global struct among ops
  • #5592: Fix pcc on Falcon 7b prefill by turning on l1 packer on MLP 4h-to-h matmul
  • Fix layernorm beta data format reconfig
  • Add linked support for in0 in1 mcast in matmul
  • #4957: optimizing construct_2d_padded_tensor_list
  • #4003: added ttnn.as_tensor and enabled support for caching torch tensor
  • Revert "#0: Fix for fail in asinh backward"
  • #5829: Use moreh_common.hpp for data movement kernels across moreh OPs
  • Barsic/ttnn ops
  • #6030: Update resnet performance metrics
  • #5876: pytest & c++ test logging cleanup
  • #0: Use both 2x2 and 2x4 machines on every scheduled run
  • Add single core matmul benchmark
  • #6079: Update FORCE_INLINE to be nop when watcher is enabled
  • #5980: Fix a hard-coded bounds check in dprint
  • #5389: merged ttl and ttnn tensor classes into one
  • Initial Performance Model
  • fix ci
  • TTNN RN50 :: on the road to match perf with TTLIB version
  • #4438: Optimized single-core fold op
  • #5589: Add repeat-interleave and addcmul sweeps
  • #6055: Add square backward support
  • #6057: Add backward support for lgamma
  • #6056: Add backward support for frac and trunc
  • #6066: Add support for backward log sigmoid
  • #6002: Add backward support for binary maximum
  • Ngrujic/improve conversion to bfloat8b in sweeps
  • #5829: Use moreh_common.hpp for compute kernels across moreh OPs
  • #0: Remove post-commit label from multi device pipeline because it's not actually post commit
  • Add pack l1 acc to resnet conv
  • #6144: Skip 512x512 cross attn 2d upblock for now in nightly because it hangs
  • #6061: Add tanhshrink, threshold, Unary EQ backward ops support
  • Width Sharded Concat for Unet
  • #5184: uncommenting various moreh test case.
  • Fix compute kernel config arg for resnet50
  • Nsmith/untilize unit test
  • Revert "Revert "#5389: merged ttl and tensor classes into one""
  • #4438: Do not use the new fold op in Resnet tests
  • Remove corerangeset that does not work on wormhole
  • #6129: Expose kernel config attrs and use 4 dst tiles for fp32 configs
  • #5391: Add device perf
  • #0: Use multiplier for wormhole b0 mulsi3
  • #4003: removed ttnn.Tensor autoclass from tensor.rst
  • TTNN MultiDevice Support
  • build artifacts
  • #4947: Add noc alignment checks to watcher
  • Add ttnn multi-chip unit test for checking device shards
  • Nsmith/fix unet
  • #6043: Random program stress test of command queues
  • Logit and logiteps backward support
  • Backward support for log2
  • Add missing ttnn tests and disable broken tests until issues are fixed
  • Fix Events feature for FD1.3 (out-of-order event ids, events feature missing) #6093
  • #5873: make top-level post commit workflow re-useable
  • #5589: add groupnorm for ttnn sweeps
  • Ngrujic/ttnn sweeps 4
  • Add ethernet datamover (EDM) - a foundational ethernet transfer engine
  • #6116: Add backward support for softshrink
  • #0: Add verbose make logs to artifact and make nicer name on metal
  • #0: Only use 2x4 setup for multi-card WH CI as 2x2 does not provide us good feedback
  • #4809 dprint tensix regs
  • #4003: fixed bloom perf test
  • #6187: Conv bugfix
  • #0: concat RM support variable stick widths across inputs
  • TTNN RN50 on WHB0
  • #6084: Lower thresholds slightly after using proper configs for device resnet
  • Fast dispatch 2.0 proof of concept
  • #6218: add pytest for matmul 1d 2d
  • #6177: use is_tensor_storage_on_device so it works for MultiDeviceStorage
  • #6082: support workers + eth cores in one program
  • #6215: Rename TensorToMeshMapper/MeshToTensorComposer
  • #6164: Update test_noc_unicast_vs_multicast_to_single_core_latency to not use same cores for producer and consumer on WH
  • #6117: Add backward support for softplus
  • #6223: remove redundant call to context switch
  • Integrate EDM with all-gather.
  • #6136: Add backward support for unary LE and GE
  • #5398: fix unicast binaries
  • Barsic/ttnn ops 2
  • #5380: Add wormhole_b0 model perf tests, only falcon7b in ttlib for now
  • #5372: Updated README.md file for demo
  • #4003: updated ttnn.concat to have a registered fallback
  • Llama2 functional bringup
  • #5589: Add working BFLOAT8_B sweeps to working folder
  • FD2.0 rename HostQ->PrefetchQ, add multi-core capability, fix NOC coords
  • #0: bugfix in ttnn resnet caught by nightly
  • #0: fix tt_bisect build bug
  • Watcher Asserts
  • #6183: add unit test for sd matmul ops
  • #6254: Make program cache per device:
  • #5394: Add functional version of Mamba architecture
  • #6257: Add temporary convenience script for 800MHz / new eth reset dependent CI
  • #5661: Enable gtests for fast dispatch + R chip
  • Alex/metal/bmm large block untilize out
  • #5389: made tensor attributes public and use ttnn::Shape instead of tt::tt_metal::Shape for storing shape
  • Revert "#6183: add unit test for sd matmul ops"
  • #4003: print all of the L1 buffers using ttnn.print_l1_buffer_state
  • #4003: print all of the L1 buffers using ttnn.print_l1_buffers
  • #4438: Implement sharded multi-core fold op for Resnet50
  • #6149: disabled the check for comparing generated report with GOLDEN_L1_BUFFER_REPORT because on pipelines it looks different than when running locally
  • FD2.0 fixes+mcast support for write and packed_write
  • Shwetank tt/config
  • #0: Change order of device and use_program_cache fixture in remaining pytests
  • Softplus with beta and threshold param
  • Build tests during artifact creation
  • #6149: disabled test_print_l1_buffers_of_add_operation
  • #4003: updated ttnn.to_torch to work with bfloat8_b tensors that are not multiple of tile size without tile padding
  • #0: add to/from L1 reshard test
  • #0: Add back deleted shape assertions for interleaved concat
  • test errors flagged by watcher
  • #0: fix incremental build
  • Merge xuncai/llama-attention-galaxy to main: First version of llama-attention galaxy on emulated chips
  • #6329: Fixing a bug causing mismatch on indices
  • #6321: Test which sweeps read/write buffer and just checks that the e…
  • Support moreh_getitem forward
  • #6125: Update in0_block_w to be full shard width for sharded 2D systolic matmul
  • #6107: Add softsign, sign, unary ceil backward support
  • #6226: Add backward support for div
  • #6234: Add backward support for rdiv
  • #6236: Add backward support for fmod and remainder
  • #4003: added positional embeddings to bert and updated ttnn_sharded_optimized_bert to run with batch size of 12
  • Indexed Fill
  • #5589: remove dtype in gen function sweep tests where needed
  • #6347: Print built-in defines once only
  • #0: Add Mo as code owner on profiler code
  • #0: Simplify tt_lib.scripts package by adding a specific tt_eager/scripts directory and putting the production scripts in there, whereas development scripts will stay in /scripts
  • #0: Fixture reorder changes reverted for falcon_7b perf test
  • #5424: remove metal_ckernel_sfpu
  • #0: Update remaining tt_lib.program_cache calls to use device APIs
  • #6183: add unit test for sd matmul ops
  • #6289: fix dispatcher page calculation
  • #5924: Enable unet on wormhole_b0 changes
  • #6325: skip test_multi_device.py for grayskull arch
  • Alex/metal/pack untilize no repack
  • #6144: Not hanging on GS or WH with or without Watcher
  • Agrebenisan/swq hwq cardinality cleanup
  • #6146: Add backward support for conj
  • #0: bug fix UTWH div_up instead of div trunc for calculating CB sizes
  • Fix To/From Sharded Bug
  • #6206: Fix resharding page mapp...

v0.44.0

27 Feb 15:57

📦 Uncategorized

  • Update CreateBuffer to return shared_ptr, and Enqueue R/W buffer to accept std::shared_ptr
  • #4794: Implement DownBlock2D using ttnn for stable_diffusion model
  • #4797: Implement BasicTransformerBlock sub-module using ttnn for stab…
  • #0: write cluster config for FD mode, non tunneling cores as well
  • Update bw test, change mulsi calls to use *
  • #3003: updated tt-lib documentation
  • #0: Update to v0.44.0
  • #4003: added ability to trace ttnn operations using torchtrail library
  • Support moreh logsoftmax
  • #4614: gitmodules: Use https URLs for submodules
  • #0: add reviewers to frequently touched ops docs file
  • backward ops - hypot and atan2
  • #4885: Move program device map to program
  • #4858: Add support for float to int typecast
  • Matmul_block on a smaller grid size
  • Revert "#0: Add support for typecast float to int"
  • Add dst ethernet router support and remote command processor to accept FD packets on remote chip
  • Falcon40B TT Implementation
  • #5198: Fix moreh softmax related bug
  • #0: skip MOREH Softmax tests from main
  • #3122: Use device grid size in falcon_attention to be generic...
  • #0: Add assertions for interleaved tensors for ops that don't support sharding
  • #5169: Add activation ops to ttnn
  • #3003: add duration to the ttnn operation nodes when TTNN_ENABLE_LOGGING=1 is used to compile the code
  • #5027: Optimize group attn matmul for Falcon40B decode
  • #0: add documentation about managing documentation
  • Adding docs for maxpool, avg pool and upsample
  • Revert "#0: skip MOREH Softmax tests from d5811b7
  • #5165: Add hyperbolic ops to ttnn
  • #4866: Add grayskull open source llk-library
  • #5002: simplified preprocessing of CNNs using preprocess_model
  • Create GroupNorm sharded in TTNN
  • #5097: Support for dedicated completion queue thread
  • upsample test calculate grid
  • fix for sharded allocator when num banks == num cores
  • MHA tutorial interactive notebook with diagrams
  • #4003: Adding a profile tutorial
  • #0: Added non-blocking read stress test
  • Revert "MHA tutorial interactive notebook with diagrams"
  • #0: Update all_gather to work for multi_link. Update falcon-40b to use 2 links for all gathers
  • #5142: Remove slow dispatch mode from working sweeps
  • #3003: fixed the input tensor documentation
  • #0: Temp slower resnet VM run
  • throw on fast dispatch for to_host_sharded as it's not supported
  • #5253: Fix kv_past_len being passed in to rotary embedding for falcon models
  • #5233: started adding ttnn_functional_resnet
  • #3003: updated ttnn documentation to explain what features it has over tt_lib. Added standalone examples of basic usage of ttnn
  • #0: Speedup incremental builds
  • #0: Change setup.py to be git worktree friendly
  • MHA tutorial interactive notebook with diagrams
  • #3003: disable tutorial 6 from running as the unit test
  • Agrebenisan/non blocking tensor reads
  • #5275: CODEOWNERS: update to include files relevant for ttnn team
  • Fix an intermittent launch message transfer error
  • Revert "MHA tutorial interactive notebook with diagrams"
  • #0: add parens in LLK doc
  • #3003: only unit test tutorials that work on pipelines
  • #5246: Add unary math ops to ttnn
  • Vignesh/stable diffusion ttnn basic transformer block fix
  • #4854: Implement attention and rms_norm sub-module using ttnn for mis…
  • #4795: Add upblock2d to functional stable diffusion model
  • #4796: Implement Transformer2DModel using ttnn for stable_diffusion m…
  • #0: Adding llk wormhole_b0 submodule
  • #4003: Adding pybind11 to ttnn
  • #5296: Fix broken link to host_api.hpp in README.md
  • #0: Fix bug with the way we were measuring bert inference time
  • #0: Change local tt_lib._C module install from symlink to copy
  • #5233: added ability to fold batch_norm2d into conv2d
  • #5222: replace hex8_to_hex32.py with cpp to shave off some compile time - temporary fix
  • Enable tests for WHB0
  • #5137: Cleanups for newer Linux distro / toolchains
  • #5233: implemented support for converting all Resnet-18 modules using preprocess_model function
  • #3003: fix model preprocessing bug
  • #4799: Implement CrossAttnDownBlock2D sub-module using ttnn for stabl…
  • #4800: Implement UNetMidBlock2DCrossAttn using ttnn for stable_diffus…
  • #4798: Add ttnn cross attn upblock2d in functional stable diffusion m…
  • #4801: Implement Unet 2D Condition model using ttnn for stable_diffus…
  • #4965: Rename Conv2D to Conv2d and MaxPool2D to MaxPool2d to match torch
  • #0: Remove departed team member from CODEOWNERS
  • #0: add to codeowners
  • #5314: Only stall on first scheduled read after commands with side effects
  • #4965: fix bad rebase
  • #0: Add more instructions for dispatching workflow actions and a note about skipping git hooks
  • Update optimized Bert to support WH grid sizes, add sharding support for RMSNorm
  • #4642: create gtest_smoke as a sanity test suite
  • #5341: context switch if eth txq is full
  • #5323: Convolutions of small size fail during parallelization calculations
  • Npetrovic/transformer softmax
  • Fix groupnorm for narrow channels
  • #4862: added more tests for ttnn bloom. Update optimized ttnn bert to match the structure of non-optimized ttnn bert
  • #0: Add an envvar parser with value detection and default value setti…
  • #4732: Clean up compute kernel apis
  • #5318: Modify Falcon7B to use attn_matmul for wormhole
  • #0: make logLocationsRecord a static function
  • #5233: run convs with auto-format
  • #5377: Avoid segfault by checking buffer !null before getting device
  • Alex/metal/pack untilize b0
  • #4487: Support block sharding in upsample
  • #5359: update python package transformers + dependencies to include Falcon
  • #3708: Add support for LN having gamma/beta in bfp8
  • #4003: Skip sweep tests if not available
  • #4003: use faster TMs in optimized ttnn whisper
  • #4732: Clean up compute_kernel_api
  • More optimizations for group_attn_matmul
  • #5233: updated resnet18 to run residual connections
  • #3003: added more meaningful errors to ttnn. Updated getitem to run on device in the cases when it can
  • #5233: simplified the logic in tracer
  • #3003: include ttl operations and necessary types under ttnn.ttl
  • #0: Add note about no merge commits in main
  • #0: Add timeout in profiler regression workflow
  • codeowners update
  • #5365: Add device argument to determine grid size based on target
  • disable whisper until further investigation, see issue #5430
  • #3003: fixed ttnn convs
  • #3886: Fix build error for C++ tests in debug mode
  • #4954: Support depth 32 in maxpool writer
  • #0: Pass output cb to pack init functions
  • #0: skipping DeviceLoadBlankKernels on remote devices
  • #5359: transformers: update version and relax pcc asserts
  • #3003: guidelines for adding new op
  • Don't assume user has one entry in their $PYTHONPATH
  • FP32 tensor support for matmul
  • #3003: updated tutorial 001 to describe the tensor more comprehensively before showing the add
  • Onboard additional metal code owners
  • #5402: Add redesigned host-side sw command queue, it can be configured i…
  • #3003: fixed docs
  • Alex/metal/enable conv tests on b0
  • #5356: git bisect script to find broken commits
  • #0: Update data_format.cpp file
  • Add skip to full grid matmul whb0
  • #3003: simplified the logic in ttnn/operations/matmul.py. Added dataclasses instead of tuples for CoreGrid and ShardShape
  • #5204: adding moreh's test suite. removing an absolute assertion.
  • Npetrovic/lt gt ne fix
  • #0: Move device id attribute from tensor to DeviceStorage
  • #3003: fixed scheduled pipeline
  • Npetrovic/transformer concat sweeps ttnn
  • #3003: added support for running ttnn.matmul using 1D_systolic_array. Also, added support for passing in the program config directly

v0.43.0

08 Feb 18:02

📦 Uncategorized

  • #4668: Yolov5 GS Demo Benchmarking
  • #0: uplift umd; pick up fix for n150 cluster
  • #3178: Fix for wormhole b0 reduce w
  • #4489: fixed bugs in the program caching of eltwise unary and eltwise binary. Updated bloom to use L1 memory config
  • #4821: Add cumsum op to tt_dnn
  • Dispatch/Bandwidth tests
  • #4003: fixed test_eltwise_unary_op
  • Argmax and Argmin Support
  • #3212: softmax works after reduce fix of max, sum, etc. for WHB0
  • #0: (MINOR) Update version to v0.43.0
  • #4761: Add call to ttl repeat_interleave and also provide script for …
  • #4003: fixed the bug with printing the compile-time attributes
  • Support moreh arange
  • Remove skip_for_wormhole_b0 for test_moreh_softmax and test_moreh_softmin
  • #4541: remove unpad start at 0 limitation
  • Agrebenisan/restart cmd fix
  • Support moreh SGD
  • #0: Use fetch-depth: 0 instead of fetch-tags because otherwise git complains of commit SHA/tag conflict
  • #0: Add code owners for primary operations api binding
  • #4547: Add 2x2 window unit tests to ttnn maxpool
  • #4003: restructure ttnn
  • #4889: Change TileSlice printing to only print tile data
  • #4836: Add support for blocking conv activation in 2d systolic conv v…
  • #0: Update unicast cycles lower bound
  • #4904: Add support for 1d width sharded LN
  • #4941: Convert command header to struct for easier maintainability
  • #4823: enable sum_0 operation fails with low PCC [Wormhole,Grayskull]
  • Fix sharded buffers for one core in fast dispatch
  • #4906: global reduce sum, mean, max, min operations added
  • Revert "#4823: enable sum_0 operation fails with low PCC [Wormhole,GS]
  • #0: Change codeowners from specific op binding files/dirs to all tt_lib bindings
  • #4003: split unary sweep into per op sweeps
  • #4232: added support for converting from numpy arrays to ttnn tensors. Borrow data whenever possible when converting from numpy/torch
  • Uplift AttnMatmul to support GroupAttnMatmul
  • Add watcher-specific CI tests
  • #4916: Add avg pool to ttnn
  • #0: Add a lock on DPRINT server raise/wait structures
  • #4967: added validation for input tensors
  • #4971: update documentation by a new doc hierarchy;
  • #0: Leftover decorate_operation replacement for avg pool
  • #4899: fix the permute to operate on the intended shape
  • #4730: Add tt_lib.tensor.concat
  • Aliu/enqueue eth
  • #4003: Updating functional performance from changes in ttnn.permute w…
  • #4984: Remove dead OP_INFO and graph interpreter
  • #4878: initial commit to add Conv parameters to ttnn.preprocess_model_parameters
  • Update Program Hashes for Ops using Mem config
  • #4984: Remove unused dprint functionality
  • Aliu/ci fix
  • #4215: Add Argmax and Argmin Fallback
  • #4999: added input tensor validation to add, sub and mul operations.
  • Support for softmax rm major sharding and causal mask sharding
  • #0: provide API for where() to support scalar True/False branches
  • #5003: Update expected compile and runtimes for perf regression on VM
  • Revert "Update Program Hashes for Ops using Mem config"
  • #4931: add apis to get ethernet by socket ids
  • #4786: Add upsample_nearest2d functional stable diffusion
  • #4986: deploy docs only to main and enable devs to run docs build on different pages
  • Deploy ttnn sweeps results to docs
  • #4958: Move all python api unit tests to frequent in order to reduce SD pipeline length
  • #4999: Added input validation for ttnn.matmul and ttnn.linear. Add unit test for linear operation. Update input tensor validation in binary.py. Fix compute_output_shapes in bmm_op.cpp
  • #4620: Fix+improve bw test
  • #4852: Add unit tests for functional bloom
  • #5032: scalar argument versions for relops
  • #0: Add some README recommendations from MCW to clarify issue about access to internal workflows VM installation page
  • #4790: Implement GEGLU using ttnn for stable_diffusion model
  • #4999: Adding validation checks
  • #4791: Implement Feedforward sub-module using ttnn for stable_diffusi…
  • Npetrovic/bw ops sweeps
  • #4999: update documentation of ttnn operations to include the validation schema
  • #0: Remove model run from frequent_api_pipeline per @tt-rkim
  • Minor dprint/watcher cleanup
  • #4858: Add support for typecast
  • #0: Disable dprint tests because they're flaky at the moment
  • #4946: Add trig ops to ttnn
  • Nshanker/convs split by 2
  • #4946: Add inv trig ops to ttnn
  • #4003: fixed circular dependency in decorators
  • #5054: Removed asserts from conv op host code that are not required. …
  • #4003: fixed circular dependencies in ttnn
  • #4852: Fix CI pipeline by re-enabling functional bloom for causal LM
  • GroupNorm Sharded. support
  • #4972: is_sharded and memory_config is free from tensor
  • #0: eltwise ops/activate operator tracking for GS, and WHB0
  • Aliu/fd tunneling pr
  • #4642: Converted 14 old cpp tests to use gtest, with capabilities to switch between FD/SD when possible
  • #4852: Add tests for functional ttnn bloom implementation.
  • #4003: correctly convert all parameters of torch module to ttnn parameters
  • #5082: Pow gradient calculation method is different with pytorch
  • Argmax/Argmin support for channel, batch and all dim
  • #4420: switch to shared_ptr
  • #4420: return shared_future from taskflow async wrapper
  • Minor DPrint fixes
  • #0: Enable/disable clearing L1 from env var
  • #4003: started moving ttnn operation to C++
  • #4003: Add script to help with finding issues that we need approval for
  • #5044: Adding support for optional output tensors
  • #4003: Adding the open flag to show only open PRs
  • #5048: Add CreateDevices and CloseDevices api to detail
  • decouple ClearProgramCache from CommandQueue
  • Conv fixes for padding input channels. Shallow conv fixes. Conv input/output autoformatting. Cleanup
  • Asarje/mp unpack tilize fused
  • Update CreateBuffer to return shared_ptr, and Enqueue R/W buffer to accept std::shared_ptr
  • #5137: Cleanups for newer Linux distro / toolchains
  • Revert "#5137: Cleanups for newer Linux distro / toolchains"
  • Revert "Update CreateBuffer to return shared_ptr, and Enqueue R/W buffer to accept std::shared_ptr"
  • #4793: Implement ResnetBlock2D using ttnn for stable_diffusion model
  • #4788: Implement Downsample2D using ttnn for stable_diffusion model
  • #4792: Implement CrossAttention sub-module using ttnn for stable_diff…
  • #4747: Reduce amount of samples in bert sweeps
  • #4789: Add upsample2d to functional_stable_diffusion model
  • #0: Add fix for lamb optimizer
  • #5057: Add relational ops support to TTNN
  • skip eth test suite on GS
  • #4003: updated ttnn.Tensor to be derived from ttl.tensor.Tensor
  • Asarje/shwetank upsample
  • #5082: power gradient is erroneous when exponent is in range (0-1)

v0.42.0

26 Jan 14:59

📦 Uncategorized

  • Syrmia/new sweeps
  • Update test sweeps for the system memory input buffer
  • #4181: Add bfloat8_b dtype fix for tests that should support bfloat8_b
  • #4343: Add new op sweeps for GS and WH
  • #0: (MINOR) Update to v0.42.0
  • #4311: Automate determining and scheduling RC generation
  • Jedi main
  • #0: Remove path appends from test files
  • #4003: Adding padding for whisper
  • #4632: Add dprint server support for eth cores
  • #4003: added ttnn.group_norm
  • #4003: added ttnn.silu
  • #3999: move fallback_ops.silu -> tt_lib.tensor.silu
  • #4683: Support tracing
  • #0: Patch for bad state reached when enqueuing trace
  • Nshanker/remove pow of 2 req for channels size
  • #4003: added ttnn.pad
  • #4730: Adding ttnn.concat as fallback
  • #4003: added ttnn.split
  • Syrmia/ttnn sweeps
  • #4347: Move VGG tensors to L1
  • #4670: Add end to end demo for functional roberta model
  • #4431: mnist gs_demo benchmark
  • #4623: lenet gs demo benchmarking [Pending CI]
  • #4720: Improve folder structure of broken sweep tests
  • Adding interface to assign dispatch kernels to dispatch functionality and adding kernel to service remote command queue
  • #4003: Fixing whisper pcc in last layer
  • #4003: updated ttnn unit tests to assert using higher PCC thresholds
  • #4761: Adding fallback for repeat_interleave
  • #4003: simplified the logic in to_layout
  • #4003: added ttnn.log
  • #4003: updated ttnn.to_layout and ttnn.pad to do the right thing with padded shape
  • #0: Fix reference to Python integration test in README
  • #0: As a quick fix for now, source /etc/rc.local to re-insert number of hugepages back in after starting weka service in perf pipelines
  • #4003: updated model names
  • #4617: Matmul went to 0.9998887677925289 with float comparison to torch
  • #0: Fix bad access to memconfig/device when input tensors are on host
  • #4503: Demo for functional bloom
  • #4611: Add end to end test for ViT model with ImageNet data
  • #4506: SSD gs demo benchmarking
  • #4504: Add end to end demo for functional t5 model
  • #4557: Uplift swin model to resolve errors in tests & Add test_perf_accuracy...
  • #4556: Roberta gs demo benchmarking
  • #3974: nanogpt uplift and move weights to weka path
  • #4610: EfficientNet gs demo benchmark
  • #4003: added more sweeps
  • #4231: Fine-tune the unary ops for add, sub, div, mul binops with one scalar constant arg
  • #516: Sanity check tracy artifact generation
  • #4003: fixed crashing sweep tests
  • #0: Update get_semaphore to return 16B aligned semaphore addresses
  • #0: Add tracy dependencies to github actions runner workflows
  • #4730: Add sweep test for ttnn.concat
  • Update ops for sharding used in falcon 40b
  • #4833: Create initial ttnn sweeps with csv artifact upload
  • #4003: debugging whisper
  • #4003: Setting all = [] to block wildcard imports
  • TTNN Sharded tensor support
  • #3662: Impl moreh_clip_grad_norm
  • #4609: Deit gs demo benchmarking
  • #4741: Add sum op to tt_dnn
  • #4622: Yolov3 GS demo Benchmarking
  • #0: Add weka mount + force hugepage mount with /etc/rc.local in frequent pipelines
  • #0: Reduce timeout of multi queue single device FD post commit
  • #4003: Make ttnn sweep tests available from pytest
  • Add MaxPool2d to ttnn
  • Ttnn 4761 add sweep for repeat interleave
  • #0: Remove checkout secret
  • #4847: Error out when there are insufficient num hugepages
  • simpler hugepage check
  • Revert "#4839: simpler hugepage check"
  • #4862: Disable test_moreh_clip_grad_norm_with_error_if_nonfinite
  • #4374: Benchmarking for bloom TT model
  • #4505: Add end to end demo for functional bert model
  • #4003: updated documentation
  • #4003: updated concat operation to raise an exception if the dimension is out of range
  • #0: Loosen models perf tolerance for GS
  • #0: Add more instructions on syseng assets installation + direct users to additional hugepages setup if needed for cloud VMs
  • #4815: New restart command which safely resets a command queue into a starting state
  • Revert "#4815: New restart command which safely resets a command queue into a starting state"

v0.41.0

13 Jan 21:15

Metal

API Changes

  • tt::tt_metal::detail::GLOBAL_CQ replaced with tt::tt_metal::detail::GetCommandQueue(Device *device)
  • New num_hw_cqs parameter to specify the underlying number of HW CQs for a given Device: Device *CreateDevice(chip_id_t device_id, const uint8_t num_hw_cqs = 1, const std::vector<uint32_t>& l1_bank_remap = {});

Tools

Profiler

  • Integrated Tracy host-side CLI capture and CSV report generation with Metal's profiler infrastructure
  • Added support for device profiling on ethernet cores for Wormhole systems.

ttNN

Infrastructure

  • Updated ttnn documentation with visualizations and examples
  • Added padded shape to ttnn
  • Renamed ttnn.nlp to ttnn.transformer
  • Updated ttnn.transformer.split_query_key_value_and_split_heads to handle most shapes, multi-head query, and cases where key_value_states are used to compute key and value (see the sketch after this list)
  • Added ttnn.rms_norm
  • Added ttnn.Shape and exposed support for padded shape. Simplified broadcasting and reduction operations
  • Moved ttnn.Tensor to C++
  • Added debug decorator for ttnn operations
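
A minimal sketch of the renamed ttnn.transformer namespace is shown below; the device id, tensor shape, and num_heads value are hypothetical, and keyword arguments may differ slightly between releases:

```python
import torch
import ttnn

# Hypothetical device id and shapes, chosen only for illustration.
device = ttnn.open_device(device_id=0)

# Fused QKV projection output: [batch, seq_len, 3 * hidden], hidden = 16 heads * 64.
qkv = ttnn.from_torch(torch.randn(1, 384, 3 * 1024), dtype=ttnn.bfloat16)
qkv = ttnn.to_layout(qkv, ttnn.TILE_LAYOUT)
qkv = ttnn.to_device(qkv, device)

# Split the fused tensor into per-head query, key, and value tensors
# (num_heads is assumed to be the relevant keyword argument).
query, key, value = ttnn.transformer.split_query_key_value_and_split_heads(
    qkv, num_heads=16
)

ttnn.close_device(device)
```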

Operations

  • The layernorm, conv, and softmax operators were optimized for multi-core computation; model-specific operators for Falcon7B were also added.
  • The normalize_global operator was added to the tt_lib.tensor namespace; it normalizes every element to the mean and standard deviation of the entire tensor (see the formula after this list).
  • The lamb_optimizer operator was added to the tt_lib.tensor namespace to help compute the back-propagation and weight-update steps of a DNN training loop.
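
For reference, the whole-tensor normalization performed by normalize_global corresponds to the standard score (a sketch of the math, not the exact kernel implementation):

```latex
y_i = \frac{x_i - \mu}{\sigma},
\qquad \mu = \frac{1}{N}\sum_{j=1}^{N} x_j,
\qquad \sigma = \sqrt{\frac{1}{N}\sum_{j=1}^{N} (x_j - \mu)^2}
```

where N is the total number of elements in the tensor.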

The following backward operators, for use in the back-propagation training loop, have been added to the tt_dnn library; they are accessible with the _bw suffix in the tt_lib.tensor namespace. A usage sketch follows the list below.

 1. abs
 2. add
 3. addalpha
 4. addcdiv
 5. addcmul
 6. binary_assign
 7. binary_le
 8. clamp
 9. clamp_max
10. clamp_min
11. div
12. exp
13. fill
14. fill_zero
15. gt
16. log
17. lt
18. max
19. min
20. mul
21. ne
22. neg
23. relu
24. rsqrt
25. rsub
26. sigmoid
27. sqrt
28. sub
29. tan
30. tanh
31. unary_add
32. unary_assign
33. unary_div
34. unary_mul
35. unary_pow
36. unary_sub
37. where
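
A minimal usage sketch for the _bw variants, assuming the common pattern of passing the upstream gradient followed by the forward inputs; the tensor-construction helper and the exact argument order are assumptions based on the test style of this era, so consult the tt_lib documentation for the authoritative signatures:

```python
import torch
import tt_lib as ttl

device = ttl.device.CreateDevice(0)

def to_tt(torch_tensor):
    # Assumed helper: host torch tensor -> tiled bfloat16 tensor on device.
    return (
        ttl.tensor.Tensor(
            torch_tensor.reshape(-1).tolist(),
            list(torch_tensor.shape),
            ttl.tensor.DataType.BFLOAT16,
            ttl.tensor.Layout.ROW_MAJOR,
        )
        .to(ttl.tensor.Layout.TILE)
        .to(device)
    )

grad = to_tt(torch.randn(1, 1, 32, 32))   # upstream gradient
a = to_tt(torch.randn(1, 1, 32, 32))      # forward input a
b = to_tt(torch.randn(1, 1, 32, 32))      # forward input b

# Backward of elementwise mul; assumed to return the gradients
# with respect to both forward inputs.
grad_a, grad_b = ttl.tensor.mul_bw(grad, a, b)

ttl.device.CloseDevice(device)
```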

Models

  • Added ttnn implementations for Roberta, Whisper, T5-small, and flan-T5-small
  • Updated ttnn implementation of Bloom to work with L1 memory, and cleaned up ttnn implementation of BERT
  • Updated Mistral implementation to use tilized tensors and operations
  • Updated VGG model to load pre-tilized weight tensors and use tilized tensors
  • Added benchmarking demo for DistilBert and T5 using SQuAD dataset for question answering

v0.41.0-rc2

13 Jan 21:14
Pre-release

📦 Uncategorized

  • Add bfp8 in/out support for LN and SMX interleaved, integrate to BERT large bfp8/lofi
  • restart weka service before running model tests
  • Opt LayerNorm with mcast link enabled
  • Split command queue into issue/completion regions and update EnqueueReadBuffer to read from completion queue
  • #4453: Support for lazy execution mode of command queue
  • #4357: set resnet's average inference time as model's inference time
  • #4003: added debug_decorator for pin-pointing mismatches between ttnn and torch
  • re-organize stress test workflows
  • Pytest timeout plugin to allow setting a per test time limit
  • #3934: Enable DPRINT on N300 Device 0
  • add @main when referencing action
  • disable BERT test
  • #4214: Add dprint testing for multi-device
  • #4003: added optimized t5
  • DPRINT test fixes
  • Llk refactor uplift gs
  • #4490: comment out failing check to keep CI green
  • #4003: Adding whisper functional model
  • #4433: part-3 of backward ops for TT Metalium
  • #4003: fixed ttnn unit tests
  • Fix bug for calling move op back to back
  • Abhullar/revert move fix
  • #3960: add support for multichip tracy profiler and ethernet profile dump
  • #4494: add batch size to get_model_config
  • #0: Add output dtype to clone op
  • #0: Update UMD to get cluster descriptor fix to logic deducing closest MMIO device given a device ID
  • Complex ops sweep test fix
  • Create multi-queue-single-device-fast-dispatch-build-and-unit-tests.yaml
  • #4003: Adjust the threshold on whisper for now to get the pipeline green
  • #3951: uplift umd to fix multi pci bug
  • #4352: Develop Bert-tiny model using ttnn
  • #4352: Develop Bert-tiny question answering model using ttnn
  • #3812: Use tilize operators for mistral model
  • #4003: implemented ttnn.Shape using C++. The perf for bert and bloom jumped back to the numbers from last week
  • #4003: implemented ttnn.Tensor using C++
  • #2619: Move dispatch and banking config out of soc desc yamls into new core desc yamls
  • #2470: support reduce max on all dims on WH B0
  • #4529: Fix tracy zone name construction
  • #4003: updated functional_bert to look like other functional models
  • #4003: added ttnn.rms_norm and use it in optimized functional bloom
  • #2470: revert SHAID ca63a16
  • Remove extra call to deallocate input to move op and throw if input cannot be reallocated
  • #4003: added roberta
  • #4148: Add direct and interleaved ring gather tests between devices in a system with multiple WHs
  • #4549: disable failing DPRINT tests
  • Multi-cq
  • #4003: fixed pipelines
  • Add FP32 acc mode to Bmm
  • #0: Update umd
  • #4003: increased the compilation time threshold for some ttnn models
  • #4562: normalize global operator
  • Complex numbers WHB0 fix
  • #4003: pass in DRAM_MEMORY_CONFIG as the default memory_config to from_torch
  • #4346: Uplift and tilize VGG model
  • #4354: Fix embeddings documentation to correctly reflect expected input shape
  • #4003: renamed ttnn.nlp to ttnn.transformer
  • #4538: Workaround GS/WH RAW hazard
  • #4589: Improve process management in tracy cli capture
  • #4549: Disable dprint tests on E300
  • #0: Fix watcher build for erisc
  • #4490: Fix DPRINT hanging test to not check physical core coords
  • #4136: Fix core XY calculation for watcher on WH
  • #4456: Uplift NlpCreateHeads to support generic shapes
  • #4482: DistilBert gs demo benchmarking
  • #0: add myself to llk_lib codeowners
  • #4407: Benchmark t5 TT model
  • #4263: Add unit test to reproduce segfault
  • #4350: Add test for slow dispatch mode
  • #4003: removed unnecessary reshapes in ttnn.matmul and ttnn.linear
  • #4003: deleted split_heads and split_key_value_and_split_heads
  • #4514: fixed the bug in ttnn.reshape
  • #4003: made ttnn perf compilation and inference times less tight
  • #4445: Split act reads into two RISCs for first convs (ie. K=16 convs)
  • #3003: added sweep test for ttnn ops
  • Revert "#4514: fixed the bug in ttnn.reshape"
  • #0: (MINOR) Upgrade to 0.41.0

v0.40.0

09 Jan 20:01

📦 Uncategorized

  • Opt LN_sharded and SMX_sharded
  • #1919: Turn existing allocator tests into gtests
  • Agrebenisan/fd perf opt
  • #3932: Rename unary op args which were input_a -> input, binary ops from input, other -> input_a, input_b
  • #3971: Fix TSLICE printing truncation when hitting MAX_COUNT
  • #0: Fix undefined variable error when running with watcher
  • #4141: Add GetPreferredNOCForDRAMRead, GetPreferredNOCForDRAMWrite and update all ops to use these apis
  • #3420: fix eth core init L1 bug
  • #0: Add ttnn founding engineers as CODEOWNERS of functional models
  • #0: Commonize logic between E2E and device perf functions/scripts. Enable assertions for device perf scripts/ci
  • Issue 4073: Fix for host-side hanging when an invalid DPRINT WAIT command is running on the device.
  • #0: Add tt-rkim as CODEOWNERS for setup_hugepages.py
  • #4003: implemented functional t5 model
  • #3003: commonized variable names across ttnn tests. Removed ttnn.experimental. Added ttnn.unary and commonized the import of ttl unary ops
  • #0: Delete extra text in first docs page about being added to repo
  • write watcher log to built/ folder rather than kernel subfolder
  • Add Batch>1 fix for matmul blocking API
  • #4231: improve unary add, sub, mul and div implementation in SFPU. Add complex polar operator
  • #3493: sharded tensor support
  • REVERT #4231: Fine-tune the unary ops to improve performance
  • #0: Move setup_hugepages.py to release assets
  • #0: (MINOR) Update VERSION to 0.40.0
  • #4301: Fix link to announcements in README
  • #4301: Replace some more instances of Metal w/ Metalium in docs
  • Llk refactor uplift
  • #0: Fix TT-Metalium docs link in get_performance.rst
  • #0: uplift in device code
  • #4176: uplift umd plus tt_metal changes
  • init fw once
  • Merge v2 of untilize_with_halo, maxpool, and conv ops for Resnet-50
  • Backward ops for Metalium - part-2
  • #4211: Assert that hugepages number is greater than or equal to required, rather than equal to
  • Update resnet readme
  • Add Run Instructions for BERT_large sharded in readme
  • Add batch 20 for resnet-50
  • #4376: Support mixed precision for eltwise binary with prescaling
  • Increase timeout of slow dispatch unit tests and switch to Y_M_D format for ops logs
  • #0: point umd to main, cosmetic change
  • New tilize and straightforward vec gen in matmul kernel examples
  • #4216: Enable DPrint slow dispatch testing
  • #4376: Call llk reconfig functions in compute kernel apis for WH
  • #4336: #4386: Fix interleaved_to_sharded writer waiting on incorrect amount of data for uneven shards
  • #1433: removed Device* and MemoryConfig from DeviceStorage
  • #0: Increase fast dispatch post commit timeout and shorten full regressions because we no longer need that much time
  • #4003: added ttnn.mean, ttnn.rsqrt and ttnn.pow and got rid of ttl use in ttnn_functional_t5. Updated ttnn.Tensor to store shape as ttnn.Shape
  • Aliu/load base erisc
  • #4399: add spell checker script for docs spellchecking
  • #2134: Uplift UMD
  • #0: fix memory leaks found in test_sfpu via valgrind
  • Revert "#4399: add spell checker script spellcheck.sh should be read…
  • #0: update llk.rst for minor ReST syntax
  • #2934: Make one CommandQueue and one HW CommandQueue (SysmemWriter) per device
  • #4003: convert ttl.tensor.Shape to tuple when using it in torch functions
  • #4211: Fix HP targeting issues in main from cq-per-device changes

v0.39.0

12 Dec 15:57

📦 Uncategorized

  • #0: Add extra sentence about use cases in somewhat vague terms
  • #3824: cache weight tensors for mistral
  • Npetrovic/power fp sweep
  • #3918: Fix falcon7b perf profiling & add support to load weights from HF when weka is not mounted
  • Rename KernelID -> KernelHandle and CircularBufferID -> CBHandle
  • Aliu/erisc cleanup
  • #3003: ttnn program logging
  • Watcher output/doc tweaks
  • #4014: added support for uint16 datatype
  • #4000: Add links to demo folders in note in first 5 things
  • #3751: Fix sfpu load/store of ints
  • enable watcher for stress test actions
  • #3058: Give first pass at flattening build by getting rid of tt-metal intermediate libs
  • Revert "#3058: Give first pass at flattening build by getting rid of …
  • #3219: Added host functions which tilize and untilize bfloat16 vectors
  • stress test machine config update
  • #0: update to use concat on device
  • #3895: ttnn functional optimized Bert
  • #4014: Fix bug with packing uint16 datatype
  • #3824: move mistral embedding weights to weka
  • #3978: Fix readme to instruct running pytest without warnings
  • Dma/3467 dprint cleanup
  • #0: identity operator for comparison of SFPU ops
  • #3058: Add tracy back into build and test with ENABLE_TRACY=1
  • #3979: Add support for ResNet for weka unmounted machines to download ImageNet
  • #3990: Remove DPRINT SETW sticky bit
  • #4041: Add moreh_layernorm op
  • #4044: Add moreh_softmax, moreh_softmin ops
  • #3103: profile the SFPU operators
  • #0: function typo fix
  • #3211: bug in WH B0 - sum along dim3
  • Implementation for Bert Sharded Batch 12
  • #4069: Avoid reading out of bounds in the hugepage
  • #4014: Add testing for uint16 and uint32 on device
  • #0: Disable TestPrintRaiseWait gtest until a fix for nondet issue is in
  • Move hugepages section and refer to public syseng instructions for accelerator-level dependencies
  • #4055: non-deterministic test_pow_fractional PCC error with watcher enabled
  • #0: update test_sfpu and profiling conflict
  • #4043: Add discord link to docs support page + README
  • Noc on erisc
  • #3894: backward ops for tt-metal
  • #3972: Update tracy and device-side profiler docs
  • #4085: update seed value and re-verify the reported bug
  • #2860: Init one UMD per MMIO device ID and the remote devices it controls
  • #4074: Add opened, reopened, synchronize pull_request triggers (default) for static checks pipeline
  • #0: Ignore /device, not device/ in .gitignore
  • #4074: Add wording to CONTRIBUTING.md to be open to future forks + to discourage clogging up pipelines with too many PRs
  • #4053: Upgrade driver from 1.23 to 1.26 in release assets from syseng
  • #4065: Update pinned python3.8-venv to 20.04.9 because 20.04.8 is gone
  • #4096: Fix issue with DPRINT server closing too early for some WAITs
  • #4053: Add chmod ugo+x step in ansible scripts for copying over script assets
  • #4109: ttnn examples.rst needs update
  • #4158: support full repeat interleave developed for Mistral
  • #4076: Add instructions for execution for programming_examples and fix one typo
  • #0: (MINOR) Bump minor to v0.39.0
  • #4053: Get rid of FW labels for silicon runner targets
  • #3752: update ttnn tutorials and make them more descriptive
  • #3994: Add bfloat16 dtype to sweep tests
  • #0: update ownership for SFPU ops profiler, and Backward ops code
  • #3420: move init erisc info to clear l1 call
  • #3918: Add falcon caching support
  • #4125: Refactor tests for backward ops
  • Perf bloom
  • #4121: Unset TT_METAL_SLOW_DISPATCH_MODE when empty string in yaml. R…
  • #4079: Remove dprints from op kernels
  • #4176: uplift umd to include create-eth-map fixes
  • #4017: Replace static device APIs to query num available devices and num available pcie devices with standalone host APIs
  • Fixup some error messages
  • Rework build system
  • #4228: Revert umd change to see if seg faults go away
  • #4003: use if-else instead of try-except in ttnn.reshape and ttnn.permute
  • #4003: updated ttnn.model_preprocessing to keep the structure of the model weights
  • #0: Changing name for major places from Metal to Metalium
  • #4186: Move all assets except for setup_hugepages.py to internal workflows
  • #4003: run test_performance_of_bloom_for_question_answering using L1 Config and assuming fused softmax
  • #3003: updated ttnn tests

v0.38.0

24 Nov 19:50

📦 Uncategorized

  • #3820: Trunc fallback op
  • #3703: Support power with non integer exponent: tt_lib.tensor.power_fp
  • #308: Add a new test for coverage of previous issue with dprinting float consts from ncrisc
  • #0: Update UMD submodule and add cluster wrapper for get_pcie_base_addr_from_device
  • ttnn - added Bert
  • Remove asserts and enable lto for release builds
  • #2220: Use new UMD apis to get PCIe address ranges
  • #3814: Use UMD fast write path to update the CQ write pointer, clean up the names of the write/read core APIs so they do not reference DRAM
  • #0: Fix the repeat interleave doc
  • #3003: use log_debug instead of log_info for logging operations
  • Revert "#2220: Use new UMD apis to get PCIe address ranges"
  • Update get_started.rst
  • #0: Remove kkwong from CODEOWNERS
  • #0: Fix scatter op
  • #3829: Add new void* enqueue apis
  • #2516: Remove datacopy into uint32_t vector now that we have void* apis
  • #3640: eltwise binary op perf optimization
  • #0: Fix microbenchmark csv artifact path
  • #3568: Move weights dtype from bfloat16 to bfp8 in mistral model
  • Fix SPDX headers to be machine readable
  • #3804: Split device perf job into separate workflow from E2E perf
  • #0: Update untilizewithunpad to support some cases of unpadding width in width sharding
  • #2498: Upload syseng assets as part of release
  • #0: (MINOR) Update to v0.38.0
  • #2498: Revert "#2498: REVERT ME - test out release pipeline without r…
  • Update llama-2 version
  • #3566: support mistral model for generic batch size
  • #3718: Link multicasts that use the same path to avoid multiple path reservations in a row
  • remove UpdateRuntimeArg
  • #3704: Increase size of trisc1 code hole for now
  • Doc update for EnqueueReadBuffer
  • Env variable cleanup
  • Documenting Compute Kernels API Sprint
  • #3647: Add fix for test for polyval coeffs generation
  • #0: mistral code refactor and reuse variables
  • Codeowners update
  • #3914: Apply scatter for mistral model
  • Rewrote ttnn_optimized_multi_head_attention using only ttnn operations
  • Update models' landing page
  • #3904: First docs changes for Project Grayskull
  • Adding compute kernel api docs for untilize, tilize, unpack, tile_move_copy and reg_api
  • document compute_kernel_api/matmul.h, compute_kernel_api/pack.h, and compute_kernel_api/bcasth.h
  • #3887: repeat operator implementation
  • restrict my ownership to host API docs only
  • #0: update profiling for unary ops
  • #2220: Redo use new UMD apis to get PCIe address ranges
  • Merge latest resnet optimizations
  • Add support for eth kernels full stack
  • #0: Update docs on device side profiler
  • #3913: Update mem config for the mistral modules
  • #3003: updated links to steps 3 and 4 of getting started
  • #3830: Fix CB failures in perf pipelines
  • #0: enable test for wormhole, use eps from device
  • #3003: Adding ttnn_functional_bloom
  • #3926: refactored run_device_operation to commonize the logic of runn…
  • #0: add --tile-factor, --use-L1, --use-DRAM, or --help options
  • Moreh Matmul Op