dustbunny 2 hours ago

What interests me most about Zig is the ease of the build system, cross compilation, and the goal of high iteration speed. I'm a gamedev, so I have performance requirements, but I think most languages have sufficient performance for most of my needs, so it's not the #1 consideration for language choice for me.

I feel like I can write powerful code in any language, but the goal is to write code for a framework that is most future proof, so that you can maintain modular stuff for decades.

C/C++ has been the default answer for its omnipresent support. It feels like zig will be able to match that.

  • raincole 13 minutes ago

    I wonder how zig works on consoles. Usually consoles hate anything that's not C/C++. But since zig can be transpiled to C, perhaps it's not completely ruled out?

  • FlyingSnake 2 hours ago

    I recently, for fun, tried running zig on an ancient kindle device running stripped down Linux 4.1.15.

    It was an interesting experience and I was pleasantly surprised by the maturity of Zig. Many things worked out of the box and I could even debug a strange bug using ancient GDB. Like you, I’m sold on Zig too.

    I wrote about it here: https://news.ycombinator.com/item?id=44211041

  • osigurdson an hour ago

    I've dabbled in Rust, liked it, heard it was bad so kind of paused. Now trying it again and still like it. I don't really get why people hate it so much. Ugly generics - same thing in C# and Typescript. Borrow checker - makes sense if you have done low level stuff before.

el_pollo_diablo 8 hours ago

> In fact, even state-of-art compilers will break language specifications (Clang assumes that all loops without side effects will terminate).

I don't doubt that compilers occasionally break language specs, but in that case Clang is correct, at least for C11 and later. From C11:

> An iteration statement whose controlling expression is not a constant expression, that performs no input/output operations, does not access volatile objects, and performs no synchronization or atomic operations in its body, controlling expression, or (in the case of a for statement) its expression-3, may be assumed by the implementation to terminate.

  • tialaramex 7 hours ago

    C++ says (until the future C++ 26 is published) all loops, but as you noted C itself does not do this, only those "whose controlling expression is not a constant expression".

    Thus in C the trivial infinite loop for (;;); is supposed to actually compile to an infinite loop, as it should with Rust's less opaque loop {} -- however LLVM is built by people who don't always remember they're not writing a C++ compiler, so Rust ran into places where they're like "infinite loop please" and LLVM says "Aha, C++ says those never happen, optimising accordingly" but er... that's the wrong language.

    • kibwen 5 hours ago

      > Rust ran into places where they're like "infinite loop please" and LLVM says "Aha, C++ says those never happen, optimising accordingly" but er... that's the wrong language

      Worth mentioning that LLVM 12 added first-class support for infinite loops without guaranteed forward progress, allowing this to be fixed: https://github.com/rust-lang/rust/issues/28728

      • loeg an hour ago

        For some context, 12 was released in April 2021. LLVM is now on 20 -- the versions have really accelerated in recent years.

    • el_pollo_diablo 5 hours ago

      Sure, that sort of language-specific idiosyncrasy must be dealt with in the compiler's front-end. In TFA's C example, consider that their loop

        while (i <= x) {
            // ...
        }
      
      just needs a slight transformation to

        while (1) {
            if (i > x)
                break;
            // ...
        }
      
      and C11's special permission does not apply any more since the controlling expression has become constant.
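
      A runnable sketch of the two forms (hypothetical `count_a`/`count_b`; both compute the same result, only the shape of the controlling expression differs):

      ```c
      #include <stdio.h>

      /* Loop A: non-constant controlling expression; C11 6.8.5p6 lets the
         implementation assume it terminates. */
      static unsigned count_a(unsigned x) {
          unsigned i = 0, n = 0;
          while (i <= x) { n++; i++; }
          return n;
      }

      /* Loop B: constant controlling expression; the special permission no
         longer applies, even though runtime behavior is identical. */
      static unsigned count_b(unsigned x) {
          unsigned i = 0, n = 0;
          while (1) {
              if (i > x)
                  break;
              n++; i++;
          }
          return n;
      }

      int main(void) {
          printf("%u %u\n", count_a(4), count_b(4));
          return 0;
      }
      ```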

      Analyses and optimizations in compiler backends often normalize those two loops to a common representation (e.g. a control-flow graph) at some point, so whatever treatment sees them differently must happen early on.

      • pjmlp 4 hours ago

        In theory, in practice it depends on the compiler.

        It is no accident that there is ongoing discussion that clang should get its own IR, just like it happens with the other frontends, instead of spewing LLVM IR directly into the next phase.

uecker 9 hours ago

You don't really need comptime to be able to inline and unroll a string comparison. This also works in C: https://godbolt.org/z/6edWbqnfT (edit: fixed typo)

  • Retro_Dev 9 hours ago

    Yep, you are correct! The first example was a bit too simplistic. A better one would be https://github.com/RetroDev256/comptime_suffix_automaton

    Do note that your linked godbolt code actually demonstrates one of the two sub-par examples though.

    • uecker 8 hours ago

      I haven't looked at the more complex example, but the second issue is not too difficult to fix: https://godbolt.org/z/48T44PvzK

      For complicated things, I haven't really understood the advantage compared to simply running a program at build time.

      • Cloudef 8 hours ago

        To be honest your snippet isn't really C anymore by using a compiler builtin. I'm also annoyed by things like `foo(int N, const char x[N])`, whose compilation varies wildly between compilers (most ignore the bound; GCC will actually try to check the invariants if they are compile-time known).
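
        For reference, the parameter form in question (hypothetical `sum`; GCC can diagnose bound mismatches when sizes are compile-time constants, while most compilers ignore the bound entirely):

        ```c
        #include <stdio.h>

        /* The length parameter comes first so the array parameter can be
           declared in terms of it. */
        static long sum(int n, const int x[n]) {
            long s = 0;
            for (int i = 0; i < n; i++)
                s += x[i];
            return s;
        }

        int main(void) {
            int a[4] = {1, 2, 3, 4};
            printf("%ld\n", sum(4, a));
            return 0;
        }
        ```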

        > I haven't really understood the advantage compared to simply running a program at build time.

        Since both comptime and runtime code can be mixed, this gives you a lot of safety and control. The comptime in zig emulates the target architecture, this makes things like cross-compilation simply work. For program that generates code, you have to run that generator on the system that's compiling and the generator program itself has to be aware the target it's generating code for.

        • uecker 7 hours ago

          It also works with memcpy from the library: https://godbolt.org/z/Mc6M9dK4M I just didn't feel like burdening godbolt with an include.

          I do not understand your criticism of [N]. This gives the compiler more information and catches errors. This is a good thing! Who could be annoyed by this: https://godbolt.org/z/EeadKhrE8 (of course, nowadays you could also define a decent span type in C)

          The cross-compilation argument has some merit, but not enough to warrant the additional complexity IMHO. Compile-time computation will also have annoying limitations and makes programs more difficult to understand. I feel sorry for everybody who needs to maintain complex compile time code generation. Zig certainly does it better than C++ but still..

          • Cloudef 7 hours ago

            > I do not understand your criticism of [N]. This gives compiler more information and catches errors. This is a good thing!

            It only does the sane thing in GCC; in other compilers it does nothing, and since it's very underspecified it's rarely used in any C projects. It's a shame Dennis's fat pointer / slices proposal was not accepted.

            > warrant the additional complexity IMHO

            In Zig's case, comptime reduces complexity, because it is simply Zig. It's used to implement generics; you can call Zig code at compile time, and create and return types.

            This old talk from andrew really hammers in how zig is evolution of C: https://www.youtube.com/watch?v=Gv2I7qTux7g

            • uecker 7 hours ago

              Then the right thing would be to complain about those other compilers. I agree that Dennis' fat pointer proposal was good.

              Also in Zig it does not reduce complexity but adds to it by creating a distinction between compile time and run-time. It only looks lower-complexity compared to other implementations of generics, which are even worse.

              • pron 3 hours ago

                C also creates a distinction between compile-time and run-time, which is more arcane and complicated than that of Zig's, and your code uses it, too: macros (and other pre-processor programming). And there are other distinctions that are more subtle, such as whether the source of a target function is available to the caller's compilation unit or not, static or not etc..

                C only seems cleaner and simpler if you already know it well.

                • uecker 3 hours ago

                  My point is not about whether compile-time programming is simpler in C or in Zig, but that it is in most cases the wrong solution. My example is also not about compile-time programming (and does not use macros: https://godbolt.org/z/Mc6M9dK4M), but about letting the optimizer do its job. The end result is then leaner than attempting to write a complicated compile-time solution, I would argue.

                  • pyrolistical 2 hours ago

                    Right tool for the job. There was no comptime problem shown in the blog.

                    But if there were, zig would prob be simpler since it uses one language that seamlessly weaves comptime and runtime together

                    • uecker 2 hours ago

                      I don't know, to me it seems the blog tries to make the case that comptime is useful for low-level optimization: "Is this not amazing? We just used comptime to make a function which compares a string against "Hello!\n", and the assembly will run much faster than the naive comparison function. It's unfortunately still not perfect." But it turns out that a C compiler will give you the "perfect" code directly while the comptime Zig version is fairly complicated. You can argue that this was just a bad example and that there are other examples where comptime makes more sense. The thing is, about two decades ago I was similarly excited about expression-template libraries for very similar reasons. So I can fully understand how the idea of "seamlessly weaves comptime and runtime together" can appear cool. I just realized at some point that it isn't actually all that useful.

              • pjmlp 4 hours ago

                Not only was it a good proposal; since 1990 WG14 has done nothing else in that direction, and it doesn't look like it ever will.

                • uecker an hour ago

                  Let's see. We have a relatively concrete plan to add dependent structure types to C2Y: struct foo { size_t n; char (buf)[.n]; };

                  Once we have this, the wide pointer could just be introduced as syntactic sugar for this. char (buf)[:] = ..

                  Personally, I would want the dependent structure type first as it is more powerful and low-level with no need to decide on a new ABI.

              • Cloudef 7 hours ago

                Sure there's tradeoffs for everything, but if I had to choose between macros, templates, or zig's comptime, I'd take the comptime any time.

                • uecker 6 hours ago

                  To each their own, I guess. I still find C to be so much cleaner than all the languages that attempt to replace it that I cannot possibly see any of them as a future language for me. And it turns out that it is possible to fix issues in C if one is patient enough. Nowadays I would write this with a span type: https://godbolt.org/z/nvqf6eoK7 which is safe and gives good code.

                  update: clang is even a bit nicer https://godbolt.org/z/b99s1rMzh although both compile it to a constant if the other argument is known at compile time. In light of this, the Zig solution does not impress me much: https://godbolt.org/z/1dacacfzc

          • quibono 6 hours ago

            Possibly a stupid question... what's a decent span type?

            • uecker 5 hours ago

              Something like this: https://godbolt.org/z/er9n6ToGP It encapsulates a pointer to an array and a length. It is not perfect because of some language limitations (which I hope we can remove), but also not too bad. One limitation is that you need to pass it a typedef name instead of any type, i.e. you may need a typedef first. But this is not terrible.

              • quibono 4 hours ago

                Thanks, this is great! I've been having a look at your noplate repo, I really like what you're doing there (though I need a minute trying to figure out the more arcane macros!)

                • uecker 3 hours ago

                  In this case, the generic span type is just

                      #define span(T) struct CONCAT(span_, T) { ssize_t N; T* data; }

                  and the array-to-span macro would just create such an object from an array by storing the length of the array and the address of the first element:

                      #define array2span(T, x) ({ auto __y = &(x); (span(T)){ array_lengthof(__y), &(__y)[0] }; })
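
                  A runnable, concretized sketch of that idea (a fixed `int` instantiation with hypothetical names, avoiding the GNU statement-expression machinery):

                  ```c
                  #include <stddef.h>
                  #include <stdio.h>

                  /* One instantiation of the generic span: length plus pointer. */
                  struct span_int { ptrdiff_t N; int *data; };

                  /* Build a span from an actual array (not a pointer), deriving
                     the length with sizeof. */
                  #define array2span_int(x) \
                      ((struct span_int){ sizeof(x) / sizeof((x)[0]), &(x)[0] })

                  static long sum(struct span_int s) {
                      long t = 0;
                      for (ptrdiff_t i = 0; i < s.N; i++)
                          t += s.data[i];
                      return t;
                  }

                  int main(void) {
                      int a[] = {1, 2, 3, 4, 5};
                      struct span_int s = array2span_int(a);
                      printf("%td %ld\n", s.N, sum(s));
                      return 0;
                  }
                  ```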

saagarjha 10 hours ago

> As an example, consider the following JavaScript code…The generated bytecode for this JavaScript (under V8) is pretty bloated.

I don't think this is a good comparison. You're telling the compiler for Zig and Rust to pick something very modern to target, while I don't think V8 does the same. Optimizing JITs do actually know how to vectorize if the circumstances permit it.

Also, fwiw, most modern languages will do the same optimization you do with strings. Here's C++ for example: https://godbolt.org/z/TM5qdbTqh

  • vanderZwan 8 hours ago

    In general it's a bit of an apples to fruit salad comparison, albeit one that is appropriate to highlight the different use-cases of JS and Zig. The Zig example uses an array with a known type of fixed size, the JS code is "generic" at run time (x and y can be any object). Which, fair enough, is something you'd have to pay the cost for in JS. Ironically though in this particular example one actually would be able to do much better when it comes to communicating type information to the JIT: ensure that you always call this function with Float64Arrays of equal size, and the JIT will know this and produce a faster loop (not vectorized, but still a lot better).

    Now, one rarely uses typed arrays in practice because they're pretty heavy to initialize, so they're only worth it if one allocates a large typed array once and reuses it a lot after that. So again, fair enough! One other detail does annoy me a little bit: the article says the example JS code is pretty bloated, but I bet that a big part of that is that the JS JIT can't guarantee that 65536 equals the length of the two arrays, so it will likely insert a guard. But nobody would write a for loop that way anyway; they'd write it as i < x.length, for which the JIT does optimize at least one array check away. I admit that this is nitpicking though.

  • Retro_Dev 9 hours ago

    You can change the `target` in those two linked godbolt examples for Rust and Zig to an older CPU. I'm sorry I didn't think about the limitations of the JS target for that example. As for your link, it's a good example of what clang can do for C++, although I think that the generated assembly may be sub-par, even if you factor in zig compiling for a specific CPU here. I would be very interested to see a C++ port of https://github.com/RetroDev256/comptime_suffix_automaton though. It is a use of comptime that can't be cleanly guessed by a C++ compiler.

    • saagarjha 9 hours ago

      I just skimmed your code but I think C++ can probably constexpr its way through. I understand that's a little unfair though because C++ is one of the only other languages with a serious focus on compile-time evaluation.

timewizard 16 minutes ago

That for loop syntax is horrendous.

So I have two lists, side by side, and the position of items in one list matches positions of items in the other? That just makes my eyes hurt.

I think modern languages took a wrong turn by adding all this "magic" in the parser and all these little sigils dotted all around the code. This is not something I would want to look at for hours at a time.

KingOfCoders 10 hours ago

I do love the allocator model of Zig; I wish I could use something like a request allocator in Go instead of GC.

  • usrnm 10 hours ago

    Custom allocators and arenas are possible in Go and even do exist, but they are just very unergonomic and hard to use properly. The language itself lacks any way to express and enforce ownership rules; you just end up writing C with a slightly different syntax and hoping for the best. Even C++ is much safer than Go without GC.

    • KingOfCoders 9 hours ago

      They are not integrated in all libraries, so for me they don't exist.

flohofwoe 10 hours ago

> I love Zig for it's verbosity.

I love Zig too, but this just sounds wrong :)

For instance, C is clearly too sloppy in many corners, but Zig might (currently) swing the pendulum a bit too far into the opposite direction and require too much 'annotation noise', especially when it comes to explicit integer casting in math expressions (I wrote about that a bit here: https://floooh.github.io/2024/08/24/zig-and-emulators.html).

When it comes to performance: IME when Zig code is faster than similar C code then it is usually because of Zig's more aggressive LLVM optimization settings (e.g. Zig compiles with -march=native and does whole-program-optimization by default, since all Zig code in a project is compiled as a single compilation unit). Pretty much all 'tricks' like using unreachable as optimization hints are also possible in C, although sometimes only via non-standard language extensions.

C compilers (especially Clang) are also very aggressive about constant folding, and can reduce large swaths of constant-foldable code even with deep callstacks, so that in the end there often isn't much of a difference to Zig's comptime when it comes to codegen (the good thing about comptime is of course that it will not silently fall back to runtime code - and non-comptime code is still of course subject to the same constant-folding optimizations as in C - e.g. if a "pure" non-comptime function is called with constant args, the compiler will still replace the function call with its result).

TL;DR: if your C code runs slower than your Zig code, check your C compiler settings. After all, the optimization heavylifting all happens down in LLVM :)

  • messe 10 hours ago

    With regard to the casting example, you could always wrap the cast in a function:

        fn signExtendCast(comptime T: type, x: anytype) T {
            const ST = std.meta.Int(.signed, @bitSizeOf(T));
            const SX = std.meta.Int(.signed, @bitSizeOf(@TypeOf(x)));
            return @bitCast(@as(ST, @as(SX, @bitCast(x))));
        }
    
        export fn addi8(addr: u16, offset: u8) u16 {
            return addr +% signExtendCast(u16, offset);
        }
    
    This compiles to the same assembly, is reusable, and makes the intent clear.
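
    A rough C analogue of the same pattern (hypothetical `addi8`, assuming two's-complement conversion behavior):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Sign-extend the 8-bit offset to 16 bits, then add with the natural
       wrapping of unsigned 16-bit arithmetic. */
    static uint16_t addi8(uint16_t addr, uint8_t offset) {
        return (uint16_t)(addr + (uint16_t)(int16_t)(int8_t)offset);
    }

    int main(void) {
        printf("%d %d\n", addi8(0x1000, 0x10), addi8(0x1000, 0xF0));
        return 0;
    }
    ```
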
    • flohofwoe 10 hours ago

      Yes, that's a good solution for this 'extreme' example. But in other cases I think the compiler should make better use of the available information to reduce 'redundant casting' when narrowing (like the fact that the result of `a & 15` is guaranteed to fit into an u4 etc...). But I know that the Zig team is aware of those issues, so I'm hopeful that this stuff will improve :)

      • hansvm 6 hours ago

        This is something I used to agree with, but implicit narrowing is dangerous, enough so that I'd rather be more explicit most of the time nowadays.

        The core problem is that you're changing the semantics of that integer as you change types, and if that happens automatically then the compiler can't protect you from typos, vibe-coded defects, or any of the other ways kids are generating almost-correct code nowadays. You can mitigate that with other coding patterns (like requiring type parameters in any potentially unsafe arithmetic helper functions and banning builtins which aren't wrapped that way), but under the swiss cheese model of error handling it still massively increases your risky surface area.

        The issue is more obvious on the input side of that expression and with a different mask. E.g.:

          const a: u64 = 42314;
          const even_mask: u4 = 0b0101;
          a & even_mask;
        
        Should `a` be lowered to a u4 for the computation, or `even_mask` promoted, or (however we handle the internals) the result sometimes lowered to a u4? Arguably not. The mask is designed to extract even bit indices, but this way we're only ever going to extract the low bits. The only safe instance of implicit conversion in this pattern is when you intend to extract only the low bits for some purpose.

        What if `even_mask` is instead a comptime_int? You still have the same issue. That would be a poor use of comptime ints, since now the implicit conversion will always happen, and you lose your compiler errors when you misuse that constant.

        Back to your proposal of something that should always be safe: implicitly lowering `a & 15` to a u4. The danger is in using it outside its intended context, and given that we're working with primitive integers you'll likely have a lot of functions floating around capable of handling the result incorrectly, so you really want to at least use the _right_ integer type to have a little type safety for the problem.

        For a concrete example, code like that (able to be implicitly lowered because of information obvious to the compiler) is often used in fixed-point libraries. The fixed-point library though does those sorts of operations with the express purpose of having zeroed bits in a wide type to be able to execute operations without loss of precision (the choice of what to do for the final coalescing of those operations when precision is lost being a meaningful design choice, but it's irrelevant right this second). If you're about to do any nontrivial arithmetic on the result of that masking, you don't want to accidentally put it in a helper function with a u4 argument, but with implicit lowering that's something that has no guardrails. It requires the programmer to make zero mistakes.

        That example might seem a little contrived, and this isn't something you'll run into every day, but every nontrivial project I've worked on has had _something_ like that, where implicit narrowing is extremely dangerous and also extremely easy to accidentally do.

        What about the verbosity? IMO the point of verbosity is to draw your attention to code that you should be paying attention to. If you're in a module where implicit casting would be totally fine, then make a local helper function with a short name to do the thing you want. Having an unsafe thing be noisy by default feels about right though.

        • throwawaymaths 3 hours ago

          you could give the wrapper function a funny name like @"sign-cast" to force the eye to be drawn to it.

    • johnisgood 10 hours ago

      Yeah but what is up with all that "." and "@"? Yes, I know what they are used for, but it is noise for me (i.e. "annotation noise"). This is why I do not use Zig. Zig is more like a lighter C++, not a C replacement, IMO.

      I agree with everything flohofwoe said, especially this: "C is clearly too sloppy in many corners, but Zig might (currently) swing the pendulum a bit too far into the opposite direction and require too much 'annotation noise', especially when it comes to explicit integer casting in math expressions ".

      Seems like I will keep using Odin and give C3 a try (still have yet to!).

      Edit: I quite dislike that the downvote is used for "I disagree, I love Zig". sighs. Look at any Zig projects, it is full of annotation noise. I would not want to work with a language like that. You might, that is cool. Good for you.

      • pjmlp 4 hours ago

        Despite all the bashing I direct at C, I would be happy if during the last 40 years we had gotten at least fat pointers, official string and array vocabulary types (instead of everyone rolling their own like SDS and glib), namespaces instead of mylib_something, proper enums (like enum class in C++, enums in C# and so forth), a fix for the pointer decay from array to &array[0], and less UB.

        While Zig fixes some of these issues, the amount of @ feels like being back in Objective-C land, and yeah, there are too many uses of dots and stars.

        Then again, I am one of those that actually enjoys using C++, despite all its warts and the ways of WG21 nowadays.

        I also dislike the approach with source code only libraries and how importing them feels like being back in JavaScript CommonJS land.

        Odin and C3 look interesting, the issue is always what is going to be the killer project, that makes reaching for those alternatives unavoidable.

        I might not be a language XYZ cheerleader, but occasionally I do have to just get my hands dirty and do the needful for a happy customer, regardless of my point of view on XYZ.

      • throwawaymaths 3 hours ago

        the line noise is really ~only there for dangerous stuff, where slowing down a little bit (both reading and writing) is probably a good idea.

        as for the dots, if you use zig quite a bit you'll see that dot usage is incredibly consistent, and not having the dots will feel wrong, not just in an "I'm used to it sense/stockholm syndrome" but you will feel for example that C is wrong for not having them.

        for example, the use of dot to signify "anonymous" for a struct literal. why doesn't C have this? the compiler must make a "contentious" choice about whether something is a block or a literal. by contentious i mean the compiler knows what it's doing, but a quick edit might easily make you do something unexpected

      • codethief 9 hours ago

        > Yeah but what is up with all that "." and "@"

        "." = the "namespace" (in this case an enum) is implied, i.e. the compiler can derive it from the function signature / type.

        "@" = a language built-in.

        • johnisgood 9 hours ago

          I know what these are, but they are noise to me.

          • kprotty an hour ago

            C++'s `::` vs Zig's `.`

            C++'s `__builtin_` (or arguably `_`/`__`) vs Zig's `@`

          • pyrolistical 2 hours ago

            It is waaaaaaay less noisy than c++

            C syntax may look simpler but reading zig is more comfy bc there is less to think about than c due to explicit allocator.

            There is no hidden magic with zig. Only ugly parts. With c/c++ you can hide so much complexity in a dangerous way

          • Simran-B 2 hours ago

            It's not annotation noise however, it's syntax noise.

  • skywal_l 10 hours ago

    Maybe with the new x86 backend we might see some performance differences between C and Zig that could definitely be attributed solely to the Zig project.

    • saagarjha 10 hours ago

      I would be (pleasantly) surprised if Zig could beat LLVM's codegen.

      • Zambyte 5 hours ago

        So would the Zig team. AFAIK, they don't plan to (and have said this in interviews). The plan is for super fast compilation and incremental compilation. I think the homegrown backend is mainly for debug builds.

  • Retro_Dev 10 hours ago

    Ahh perhaps I need to clarify:

    I don't love the noise of Zig, but I love the ability to clearly express my intent and the detail of my code in Zig. As for arithmetic, I agree that it is a bit too verbose at the moment. Hopefully some variant of https://github.com/ziglang/zig/issues/3806 will fix this.

    I fully agree with your TL;DR there, but would emphasize that gaining the same optimizations is easier in Zig due to how builtins and unreachable are built into the language, rather than needing gcc and llvm intrinsics like __builtin_unreachable() - https://gcc.gnu.org/onlinedocs/gcc-4.5.0/gcc/Other-Builtins....

    It's my dream that LLVM will improve to the point that we don't need further annotation to enable positive optimization transformations. At that point though, is there really a purpose to using a low level language?

    • matu3ba 9 hours ago

      > LLVM will improve to the point that we don't need further annotation to enable positive optimization transformations

      That is quite a long way to go, since the following formal specs/models are missing to make LLVM + user config possible:

      - hardware semantics, specifically around timing behavior and (if used) weak memory

      - memory synchronization semantics for weak memory systems, with ideas from "Relaxed Memory Concurrency Re-executed" and its suggested model looking promising

      - SIMD with specifically floating point NaN propagation

      - pointer semantics, specifically in object code (initialization), se- and deserialization, construction, optimizations on pointers with arithmetic, tagging

      - constant time code semantics, for example how to ensure data stays in L1, L2 cache and operations have constant time

      - ABI semantics, since specifications are not formal

      LLVM is also still struggling with full restrict support due to architecture decisions and C++ (worked on for more than 5 years now).

      > At that point though, is there really a purpose to using a low level language?

      Languages simplify/encode formal semantics of the (software) system (and system interaction), so the question is if the standalone language with tooling is better than state of art and for what use cases. On the tooling part with incremental compilation I definitely would say yes, because it provides a lot of vertical integration to simplify development.

      The other long-term/research question is if and what code synthesis and formal method interaction for verification, debugging etc would look like for (what class of) hardware+software systems in the future.

      • eptcyka 6 hours ago

        For constant-time code, it doesn't matter too much if data spills out of a cache; constant-time issues arise from compilers introducing early exits, which leaves crypto open to timing attacks.

    • flohofwoe 10 hours ago

      Yeah indeed. Having access to all those 'low-level tweaks' without having to deal with non-standard language extensions which are different in each C compiler (if supported at all) is definitely a good reason to use Zig.

      One thing I was wondering, since most of Zig's builtins seem to map directly to LLVM features, if and how this will affect the future 'LLVM divorce'.

      • Retro_Dev 10 hours ago

        Good question! The TL;DR as I understand it is that it won't matter too much. For example, the self-hosted x86_64 backend (which is coincidentally becoming default for debugging on linux right now - https://github.com/ziglang/zig/pull/24072) has full support for most (all?) builtins. I don't think that we need to worry about that.

        It's an interesting question about how Zig will handle additional builtins and data representations. The current way I understand it is that there's an additional opt-in translation layer that converts unsupported/complicated IR to IR which the backend can handle. This is referred to as the compiler's "Legalize" stage. It should help to reduce this issue, and perhaps even make backends like https://github.com/xoreaxeaxeax/movfuscator possible :)

  • knighthack 9 hours ago

    I'm not sure why allowances are made for Zig's verbosity, but not Go's.

    What's good for the goose should be good for the gander.

    • Zambyte 5 hours ago

      FWIW Zig has error handling that is nearly semantically identical to Go (errors as return values, the big semantic difference being tagged unions instead of multiple return values for errors), but wraps the `if err != nil { return err}` pattern in a single `try` keyword. That's the verbosity that I see people usually complaining about in Go, and Zig addresses it.

      • kbolino 4 hours ago

        The way Zig addresses it also discards all of the runtime variability. In Go, an error can say something like

            unmarshaling struct type Foo: in field Bar int: failed to parse value "abc" as integer
        
        Whereas in Zig, an error can only say something that's known at compile time, like IntParse, and you will have to use another mechanism (e.g. logging) to actually trace the error.
    • ummonk 8 hours ago

      Zig's verbosity goes hand in hand with a strong type system and a closeness to the hardware. You don't get any such benefits from Go's verbosity.

    • nurbl 9 hours ago

      I think a better word may be "explicitness". Zig is sometimes verbose because you have to spell things out. Can't say much about Go, but it seems it has more going on under the hood.

justmarc 8 hours ago

Optimization matters, in a huge way. Its effects are compounded by time.

  • sgt 3 hours ago

    Only if the software ends up being used.