Atbash Cipher should have tests for decoding. #871

Closed
borderite opened this issue May 4, 2025 · 17 comments · Fixed by #872

Comments

@borderite

The current test suite for "Atbash Cipher" on the Common Lisp track has no tests for the decode function. Since the canonical test data includes a few of them, I think they should be implemented.
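For context, Atbash is its own inverse: decoding applies the same letter-mirroring substitution as encoding, the only difference being that encoded output is grouped into chunks of five. A minimal Python sketch of the behavior the missing tests would cover (illustrative only, not the track's Common Lisp solution):

```python
def transcode(text):
    """Map each letter to its mirror (a<->z, b<->y, ...); keep digits, drop the rest."""
    out = []
    for ch in text.lower():
        if ch.isalpha():
            out.append(chr(ord('z') - (ord(ch) - ord('a'))))
        elif ch.isdigit():
            out.append(ch)
    return ''.join(out)

def encode(plaintext):
    """Encode and group the ciphertext into space-separated chunks of five."""
    cipher = transcode(plaintext)
    return ' '.join(cipher[i:i + 5] for i in range(0, len(cipher), 5))

def decode(ciphertext):
    """Decoding is the same substitution; spaces are dropped by transcode."""
    return transcode(ciphertext)
```

Because the cipher is an involution, `decode(encode(s))` recovers the normalized plaintext, which is exactly why decode-only tests are cheap to add once encode works.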


github-actions bot commented May 4, 2025

Hello. Thanks for opening an issue on Exercism 🙂

At Exercism we use our Community Forum, not GitHub issues, as the primary place for discussion. That allows maintainers and contributors from across Exercism's ecosystem to discuss your problems/ideas/suggestions without them having to subscribe to hundreds of repositories.

This issue will be automatically closed. Please use this link to copy your GitHub Issue into a new topic on the forum, where we look forward to chatting with you!

If you're interested in learning more about this auto-responder, please read this blog post.

@github-actions github-actions bot closed this as completed May 4, 2025
@BNAndras
Member

BNAndras commented May 4, 2025

The exercise was added ten years ago so it predates the canonical test data. Syncing the test suite would break 130 completions but that’s not too alarming to me.

Given the instructions explicitly reference decoding, an argument can be made that they contradict the provided test suite which only tests encoding. In that case, an instructions append would be helpful if the maintainer doesn’t want to implement the extra tests. The tests.toml file probably should be updated then to record that those decoding tests weren’t added.
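For the record, configlet supports recording deliberately skipped cases in tests.toml with `include = false`, so the file documents the gap instead of silently omitting it. A hypothetical entry (the UUID and description below are illustrative placeholders, not the real canonical values):

```toml
# Hypothetical entry; the real UUID and description come from problem-specifications.
[00000000-0000-0000-0000-000000000000]
description = "decode a sentence"
include = false
comment = "Decoding is not tested on this track; see issue #871."
```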

@verdammelt
Member

I'm going to re-open this issue. I didn't know that atbash has a bunch of tests this track does not implement. I wonder why configlet's sync check is not highlighting them.

@borderite
Author

@BNAndras, @verdammelt It seems that tests.toml contains all tests from the canonical test data. When I ran

bin/configlet sync --tests --update --exercise atbash-cipher

the result was "The atbash-cipher exercise has up-to-date tests!" Anyway, I am working to add the missing decode tests to the atbash-cipher exercise.

@BNAndras
Member

BNAndras commented May 4, 2025

Configlet compares the local tests.toml to the upstream problem-specifications one. This system has been in place for a few years, but the local tests.toml reflects the upstream tests that were available at the time, plus any changes made since then. Configlet thinks we're up to date on the tests here because there haven't been changes upstream since the tests.toml file was created.

So someone should compare implemented tests vs what’s in each tests.toml to make sure unimplemented tests aren’t falsely included in the toml. I just did that a few months ago for the Emacs Lisp track actually.
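The cross-check described above can be scripted. A rough sketch, with the assumption (not confirmed for this track) that generated test files embed each canonical UUID in a comment; tracks that don't would match on test descriptions instead:

```python
import re

# tests.toml records each canonical case as a [uuid] table header.
UUID_RE = re.compile(
    r'\[([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})\]'
)

def toml_uuids(tests_toml_text):
    """All case UUIDs recorded in a tests.toml file's text."""
    return set(UUID_RE.findall(tests_toml_text))

def missing_cases(tests_toml_text, test_file_text):
    """UUIDs listed in tests.toml that never appear in the test file.

    Assumes the test file embeds the canonical UUID somewhere (e.g. a
    comment); a nonempty result means the toml claims tests that were
    never implemented.
    """
    recorded = toml_uuids(tests_toml_text)
    implemented = {u for u in recorded if u in test_file_text}
    return recorded - implemented
```

Running something like this across every exercise directory would surface the falsely-included cases in one pass.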

@borderite
Author

@BNAndras Let me verify if I understood you correctly. Configlet initially creates tests.toml based on the canonical test data available at the time. If some of the tests in the tests.toml file are not implemented, configlet will not know that. Is this correct?

@verdammelt
Member

@borderite oh I didn't realize you were implementing the missing tests and I went and did it. Please submit your PR; we can merge yours rather than mine.

@verdammelt verdammelt reopened this May 4, 2025
@borderite
Author

@verdammelt If you have done it, please merge yours. It is not a big deal, because my job was almost nothing. The script bin/lisp_exercise_generator.py was amazing. It automatically created new tests with no errors. The only thing I did was to write an example solution, which I would do anyway as a student.

@verdammelt
Member

> @verdammelt If you have done it, please merge yours. It is not a big deal, because my job was almost nothing. The script bin/lisp_exercise_generator.py was amazing. It automatically created new tests with no errors. The only thing I did was to write an example solution, which I would do anyway as a student.

OK, just allowing you to get the exercise reputation points if you wanted them.

I should try that test generator out sometime... I did the "hard way" (which wasn't very hard at all).

@borderite
Author

borderite commented May 5, 2025

@verdammelt I am not that interested in the reputation points, as I don't really understand what they are for. I used the generator but manually inspected the output, of course. I was very impressed, because it worked perfectly (at least in this particular case).

@BNAndras
Member

BNAndras commented May 5, 2025

Qualifying for monthly or even lifetime Insiders at https://exercism.org/insiders is the main benefit, I think, since you need a certain amount of rep in a given time frame. Otherwise, rep is used semi-regularly to identify potential maintainer candidates. I don't think building/authoring category rep can be used for much else, to be honest.

Mentoring category rep on the other hand is used to unlock supermentor status on a given track which gives you a special Discord channel and then access to the track’s representer. That’s used for automated feedback but only if the track has a custom one implemented.

Otherwise, rep is mainly used for the contributing leaderboards (weekly, last 30 days, last year, and all-time). The all-time board is dominated by folks who have been around since the v3 launch though.

@verdammelt
Member

@borderite Thank you for noticing and bringing these missing tests to my attention. I went ahead and merged my PR.

@borderite
Author

@BNAndras Thanks for the detailed information. I'll keep that in mind.

@borderite
Author

@verdammelt You are welcome.

@BNAndras
Member

BNAndras commented May 5, 2025

I think the next steps would be to identify the tests.toml files that haven't been updated in the Git history for the last four years. Those are the most likely to contain tests that weren't implemented. Then we just need to go through each of those exercises, cross-reference the implemented tests vs. canonical data, and add the missing tests. We could probably skip any exercises deprecated in the track config since those aren't available on the site unless someone solved them or was in the process of solving them before the deprecation.

@BNAndras BNAndras reopened this May 5, 2025
@BNAndras BNAndras closed this as completed May 5, 2025
@BNAndras
Member

BNAndras commented May 5, 2025

Oops, I fat-fingered the open button. I found Pascal's Triangle is missing three tests that are included in the tests.toml so I'll take care of that today.

@verdammelt
Member

> I think the next steps would be to identify the tests.toml files that haven't been updated in the Git history for the last four years. Those are the most likely to contain tests that weren't implemented. Then we just need to go through each of those exercises, cross-reference the implemented tests vs. canonical data, and add the missing tests. We could probably skip any exercises deprecated in the track config since those aren't available on the site unless someone solved them or was in the process of solving them before the deprecation.

If you are willing: could you make an issue with a list of these exercises that haven't been updated as a checklist - then we can work through them as we have time.
