• ghh a year ago

    Wow, their example to "clean up the code" does a bit more than just refactor for readability: it appears to change the output.

    One would have to check the resulting code carefully to see whether the meaning is still as originally intended, or whether it has been replaced with code that is statistically more plausible (but no longer does what it did).

    For instance, it replaces this:

      if dataset == 'animals':
        if dataset == 'turtle':
          x_train, y_train, x_test, y_test = datasets.load_turtles(with_bowtie=False)
        elif dataset = 'formal_turtle':
          x_train, y_train, x_test, y_test = datasets.load_turtles(with_bowtie=True)
      else:
    
    with this:

      if dataset == 'turtle':
          x_train, y_train, x_test, y_test = datasets.load_turtles(with_bowtie=False)
      elif dataset == 'formal_turtle':
          x_train, y_train, x_test, y_test = datasets.load_turtles(with_bowtie=True)
    
    
    The before-code responds to dataset='animals' with `load_turtles(...)` and to dataset='turtle' or 'formal_turtle' with an error; in the after-code this is reversed, although the apparent logic error and the `=`-instead-of-`==` error are resolved.
    • Hasnep a year ago

      Actually, the before-code does nothing if dataset is set to 'animals', 'turtle' or 'formal_turtle'; most of the branches are unreachable. Also, the extra else clause that raises an error and the line

          elif dataset = 'formal_turtle':
      
      are both invalid syntax.

      I think 'clean up' here means something closer to 'convert this to what I'm trying to write'.
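      If you squint, the intended logic does seem recoverable. Here's a guessed reconstruction of what the author was probably trying to write, in valid Python this time (`load_turtles` is stubbed out since the `datasets` module isn't shown, and the `ValueError` fallback is an assumption about what the bodyless else was for):

```python
# Stub standing in for datasets.load_turtles; the real signature is
# only known from the snippet above.
def load_turtles(with_bowtie):
    data = ('x_train', 'y_train', 'x_test', 'y_test')
    return data if not with_bowtie else tuple(s + '_formal' for s in data)

def load_dataset(dataset):
    # Flattened version of the nested ifs: one branch per dataset name,
    # plus an explicit error instead of a dangling, bodyless else clause.
    if dataset == 'turtle':
        return load_turtles(with_bowtie=False)
    elif dataset == 'formal_turtle':
        return load_turtles(with_bowtie=True)
    else:
        raise ValueError(f'unknown dataset: {dataset}')
```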

      • _endif_ a year ago

        Agreed, but I have to say data cleaning is actually one of the hardest steps; LLMs are simply not there yet.

        It's almost impossible for an LLM to spot all the invalid rows at once, since the data can't fit into the context window. If we prompt the model to do data cleaning thoroughly, there will be many try-and-fail steps. This happens to me as a human too: I clean some rows, expect my program to run on the data, only to find there is more malformed data. LLMs can't get it right for now; I've actually seen many cases where an LLM fails because it wants to convert types (e.g. string to date).

        Based on my experience, the best approach is simply to skip the data cleaning step in the planning stage (you can provide feedback asking the tool not to do those steps).
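        One mitigation for that clean-run-fail loop, whether a human or an LLM is driving, is to validate every row in a single pass and collect all failures instead of stopping at the first one. A minimal sketch (the row shape and date format here are made up for illustration):

```python
from datetime import datetime

rows = [
    {"name": "a", "date": "2024-01-02"},
    {"name": "b", "date": "not-a-date"},   # malformed
    {"name": "c", "date": "2024-13-40"},   # malformed: month 13
]

good, bad = [], []
for i, row in enumerate(rows):
    try:
        # The string-to-date conversion mentioned above is exactly the
        # kind of step that blows up on dirty rows.
        parsed = dict(row, date=datetime.strptime(row["date"], "%Y-%m-%d").date())
        good.append(parsed)
    except ValueError:
        bad.append((i, row))  # keep the index so the row can be fixed later

# 'bad' now lists every malformed row at once, not just the first one hit.
```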

      • IamLoading a year ago

        Unfortunately, this is true with most LLMs.

      • hazrmard a year ago

        Very cool. Something similar is Streamline Analyst [1]. It automates preprocessing and model development from an uploaded dataset.

        This is great for a more technical user (or student) who wants to read the generated code and hack away. In contrast, [1] generates a dashboard app, which is faster to use but sacrifices interpretability.

        [1]: https://github.com/Wilson-ZheLin/Streamline-Analyst

        • PunchTornado a year ago

          looks great, but it is only available in the US.

          • lispisok a year ago

            That's some privacy notice. I think I'll pass

            • _endif_ a year ago

              I feel you; this is definitely a Google-wide issue across products (e.g. https://x.com/levelsio/status/1831840497629065656). The products themselves are worth a try IMHO.

              • postalcoder a year ago

                This and the underlying tweet makes me feel seen.

                Why does developing with Google have to be so hard?

                • nerdponx a year ago

                  How is this different from what literally any other hosted services company does? If anything, Google is being comparatively honest and transparent.

                  • _endif_ a year ago

                    100%. The good part is, if you follow the tweet thread, you can see they are trying to improve things. Hopefully the effort lands and the rest of the products can follow.

                • warkdarrior a year ago

                  At least they put it up front, instead of burying it on page 14 behind some link at the bottom.