All Posts

You could break up the CSV so that you process it in chunks of 10000 events

| makeresults count=50010
| streamstats count as row
| search NOT [| inputlookup large.csv | head 10000]
| search NOT [| inputlookup large.csv | head 20000 | tail 10000]
| search NOT [| inputlookup large.csv | head 30000 | tail 10000]
| search NOT [| inputlookup large.csv | head 40000 | tail 10000]
| search NOT [| inputlookup large.csv | head 50000 | tail 10000]
Since we had the same problem in our company, this was the official Splunk Support response: the team informed us that this known issue is fixed in the upcoming Splunk 9.4.3 release. Since there is currently no ETA for that release, they suggested a workaround in the meantime.

In web-features.conf in the etc/system/local directory, add two flags under the [feature:page_migration] stanza:

[feature:page_migration]
enable_admin_LDAP-groups_vnext = false
enable_authentication_providers_LDAP_vnext = false

This should fix the issue.
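If you want to confirm the flags are actually in effect after applying them, something like the following may help. This is only a sketch; it assumes the generic configs/conf-<file> REST endpoint exposes web-features.conf on your version and that your role is allowed to query it:

| rest /servicesNS/-/-/configs/conf-web-features splunk_server=local
| search title="feature:page_migration"
| fields title enable_*

Both flags should come back as false once the workaround has been picked up.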
Thanks, I'll keep the optimization in mind and implement it as soon as my problem is solved.
Hello, and thank you for your suggestion. It works pretty well indeed! The only problem, and it's my fault, is that I forgot to specify that 1.csv contains CIDRs and not plain IPs... Any ideas?

I'd never have thought of the subsearch limits. How has it kept working all these years, given that we passed 10,000 lines years ago?
On the LB side they keep saying it's not possible to check for the HTTP status. Is there any way to render the response static?

Luca
Hi @bhupalbobbadi, if I understood your problem correctly, your second SH doesn't send emails, is that right? Have you checked that the connection between this SH and the mail server is open on the port in use (for example, with telnet from the SH to the mail server)?

Ciao. Giuseppe
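It can also be worth checking that SH's own internal logs for alert-email errors. A rough keyword search like this (the host value is a placeholder for your second SH) will often surface SMTP connection or authentication failures:

index=_internal host=<second_sh_hostname> (sendemail OR "sendemail.py") (ERROR OR WARN)
| sort - _time
| table _time host source _raw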
Your subsearch only returns a limited number of rows (possibly only 10,000, not the 55,000 you were expecting). As an experiment, I tried this

| makeresults count=50001
| streamstats count as row
| outputlookup large.csv

followed by

| makeresults count=50010
| streamstats count as row
| search NOT [| inputlookup large.csv]

I still get 40010 rows, not the 9 that I "should" get.
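If you want to see which limit is actually in play on your search head, one option is to read the [subsearch] stanza of limits.conf over REST. A minimal sketch, assuming your role is allowed to query the configs endpoint:

| rest /servicesNS/-/-/configs/conf-limits splunk_server=local
| search title=subsearch
| fields title maxout maxtime ttl

maxout defaults to 10000, which lines up with the 40010 rows above (50010 events minus the first 10000 filtered out by the truncated subsearch).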
Hi @Treize, I don't know why your lookup doesn't run correctly. How many lines are there in your lookups? Maybe you exceeded the limit of 10,000 rows. Also, did you change the name of the "ip" field in the main search or in the lookups? In any case, your search isn't optimized: it would be better to put the two subsearches in the main search and declare the field to use in the subsearches with the fields command:

<my search> NOT ([ | inputlookup 1.csv | fields ip ] OR [ | inputlookup 2.csv | fields ip ])
| dedup ip
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

Ciao. Giuseppe
Hi @Treize,

The issue is likely due to subsearch limitations. Subsearches have default limits on the number of results (10,000) and on runtime, and your second lookup (55,000 lines) exceeds the default output limit. This means the subsearch [ | inputlookup 2.csv ] might only be returning the first 10,000 IPs, causing the outer search to incorrectly identify IPs beyond that limit as "not found". Use the lookup command instead, which is more efficient and avoids these limits.

| dedup ip
| lookup 1.csv IP AS ip OUTPUT IP AS found_in_1
| where isnull(found_in_1)
| lookup 2.csv IP AS ip OUTPUT IP AS found_in_2
| where isnull(found_in_2)
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

This search first finds unique IPs, then uses lookup to check against 1.csv. It keeps only the IPs not found (where isnull(found_in_1)). It then repeats the process for 2.csv. Finally, it outputs the remaining IPs (those not found in either lookup) to 2.csv.

Quick tip: double-check that the IP column in your lookup files doesn't contain leading/trailing whitespace.

https://docs.splunk.com/Documentation/Splunk/latest/Search/Aboutsubsearches#Subsearch_performance_considerations

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi there, I'm simplifying the context: we've had a perfectly working correlation rule for several years now, and for the past two days it hasn't been working properly. The search has to list IPs and then check that those IPs are not in a first lookup and then not in a second lookup. If the IPs are in neither lookup, an alert is triggered. The IPs are then added to the second lookup so that they are ignored in future searches. It looks like this:

<my search>
| dedup ip
| search NOT [ | inputlookup 1.csv ]
| search NOT [ | inputlookup 2.csv ]
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

Both lookups have the same format:

IP
-------
1.1.1.1
2.2.2.2
etc

The first lookup has 1,000 lines.
The second lookup has 55,000 lines.

Everything was working fine, but now we have IPs that trigger alerts despite being in the second lookup. Any ideas? Thanks a lot.
Hi @hank72,

It's hard to picture exactly what you need, but I think you can achieve this by appending the results from the REST calls and the lookup, then using stats to find users common to all three sources.

| rest /services/authentication/users splunk_server=local
| search type=SAML
| fields title
| rename title AS User
| eval source="saml"
| append [| inputlookup 12k_line.csv | fields User | eval source="lookup"]
| append [| rest /servicesNS/-/-/directory splunk_server=local | fields author | dedup author | rename author AS User | eval source="directory"]
| stats dc(source) as source_count by User
| where source_count = 3
| fields User

The first section retrieves SAML users and assigns source="saml". The first append adds users from your lookup file, assigning source="lookup". The second append adds directory authors, assigning source="directory". stats dc(source) by User then counts how many distinct sources (saml, lookup, directory) each user appears in.

Note: using multiple REST calls and append can be less performant, depending on the volume of data returned by each source.

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @christal654,

To resolve the installation rollback issue for the Splunk Universal Forwarder (UF), try the following steps:

Remove remaining files and registry entries: ensure the Splunk folder is deleted from C:\Program Files\ (which I believe you have done already). Then check for any remaining Splunk registry entries: open regedit, navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Splunk and delete the Splunk key if it exists. Also check HKEY_CURRENT_USER\Software\Splunk and delete it if necessary.

Check for pending reboots.

Install using the command line: sometimes the GUI can cause issues. Try installing via the command line, replacing <version> with the actual version number:

msiexec /i SplunkUniversalForwarder-<version>-x64-release.msi /quiet INSTALLDIR="C:\Program Files\SplunkUniversalForwarder"

Check installation logs: look at the installation log files in the %temp% directory (e.g., msi*.log) to identify specific errors during the installation process.

Permissions: ensure you have administrative privileges when running the installer.

If these steps do not resolve the issue, provide the error messages from the installation logs for further troubleshooting. For detailed troubleshooting, refer to the Splunk documentation on installation issues: Splunk Universal Forwarder Installation.

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @AviSharma8,

Updated: according to the 8.2 docs, it's possible to upgrade a UF directly from 7.3 to 8.2 ("Upgrading a universal forwarder directly to version 8.2 is supported from versions 7.3.x, 8.0.x, and..."), and 9.4 supports an upgrade from 8.2 ("Splunk supports a direct upgrade of a universal forwarder to version 9.4 from versions 8.2.x and higher..."). This is contrary to the broader upgrade path for a full Splunk installation (not a UF), which has more steps because more migrations take place.

In the meantime, it's also worth mentioning that Splunk Enterprise version 9.0 and higher requires Linux kernel version 3.x or higher and has an updated OS support list. Check the supported OS list at https://docs.splunk.com/Documentation/Splunk/9.4.1/Installation/Systemrequirements

I have seen other people upgrade 7.3 -> 9.x in a single step without issue; however, your mileage may vary due to unknown migrations under the hood, so you may wish to follow the official stance above.

Here's the general process for upgrading using the .tgz package on Linux, for each of your 7.3 -> 8.2 -> 9.4 steps:

Stop the existing Universal Forwarder:
$SPLUNK_HOME/bin/splunk stop

(Optional, but recommended) Back up the configuration:
cp -rp $SPLUNK_HOME/etc /tmp/splunk_etc_backup_$(date +%F)

Download and extract the new version: download the latest UF .tgz package for your Linux architecture from splunk.com, then extract it:
tar xzf splunkforwarder*.tgz -C /opt
Note: ensure you extract it into the same parent directory as your existing $SPLUNK_HOME. For example, if your current UF is in /opt/splunkforwarder, extract the new version into /opt as well.

Start the forwarder; this requires accepting the license on the first start after an upgrade:
$SPLUNK_HOME/bin/splunk start --accept-license --answer-yes

A few additional tips:
Test the upgrade on a non-critical forwarder or a small group first.
For upgrading many forwarders, consider using deployment tools like Ansible, Puppet, Chef, or SaltStack, or the Splunk Deployment Server.
Always check the forwarder compatibility matrix to ensure the new UF version is compatible with your Splunk Enterprise indexers/deployment server.

Useful documentation links:
https://docs.splunk.com/Documentation/Forwarder/9.4.1/Forwarder/Upgradetheuniversalforwarder
https://lantern.splunk.com/Splunk_Platform/Product_Tips/Upgrades_and_Migration/Upgrading_the_Splunk_platform

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
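As a follow-up to the upgrade steps above: if you want to track which forwarders are on which version before and after each hop, a search against the indexers' internal metrics is one common approach. A sketch, assuming the version/os/arch fields reported by group=tcpin_connections metrics events are available in your environment:

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(version) AS uf_version latest(os) AS os latest(arch) AS arch by hostname
| sort uf_version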
Hi there! We finally found the solution!

On the SLM: Settings > General Setup > Edit the node > Disable monitoring, then enable it again.

Thanks to all
Hi @Zoe_,

Yes, switching from UDP to TCP might resolve incomplete log events sent via syslog. UDP is a connectionless protocol and does not guarantee delivery or prevent truncation, especially for logs larger than the network's MTU size (typically around 1500 bytes). TCP establishes a connection, ensures ordered and reliable delivery, and handles segmentation and reassembly of large messages, preventing truncation.

# Update your inputs.conf to receive TCP instead of UDP.
[tcp://514]
# connection_host can be ip or dns, depending on your needs
connection_host = ip
# Set an appropriate sourcetype
sourcetype = syslog
# Add other settings like index as needed
disabled = 0

# Ensure your old UDP input is disabled
#[udp://514]
#disabled = 1

Remember to restart Splunk or reload the configuration after the changes. Ensure any firewalls between the sending devices and the Splunk receiver allow traffic on the chosen TCP port (e.g., 514).

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
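To confirm whether events are still being clipped after the switch, comparing raw event lengths per host can help. A rough sketch (the index name is a placeholder for wherever your syslog data lands); if the maximum lengths all top out just under ~1500 bytes, that is a strong hint of MTU-level truncation on the old UDP input:

index=<your_syslog_index> sourcetype=syslog
| eval raw_len=len(_raw)
| stats count max(raw_len) AS max_len perc95(raw_len) AS p95_len by host
| sort - max_len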
I need to upgrade the universal forwarder agents on multiple instances from the current 7.3.0 to the latest version. Can we upgrade directly, or do we need to go step by step? Let me know the process and the best practice for upgrading.
Do you mean something like this?

index=np source IN ("/aws/lambda/api-data-test-*") "responseTime"
| eval source = if(match(source, "/aws/lambda/api-data-test-(.*)"), replace(source, "/aws/lambda/api-data-test-(.*)", "data/\\1"), source)
| timechart span=1m count by source
This is a very confusing search. Is this some sort of homework? The reason I ask is that it is always better to describe what you are trying to achieve, what the data looks like, what the desired results are based on sample input, and what the logic is between input and desired output, as opposed to making volunteers read your mind from some SPL snippets.
With UDP you are limited to a single datagram's length. I think I saw one or two solutions that would split the event at the frame boundary and send the remaining part in another frame, but there's no reliable way to reassemble those later. Yes, switching to TCP should solve this issue, but it might cause performance problems in some cases.
Hi,

I am working on the query below to get the count of requests processed by each API service per minute:

index=np source IN ("/aws/lambda/api-data-test-*") "responseTime"
| eval source = if(match(source, "/aws/lambda/api-data-test-(.*)"), replace(source, "/aws/lambda/api-data-test-(.*)", "data/\\1"), source)
| bucket _time span=1m
| stats count by source, _time

I get the result below, with one source by "name", a second source by address, and a third source by city. How can I represent the different API sources per minute in a good, understandable format, either as a graph or another pictorial representation?

source          _time                  count
data/name       2025-03-02 08:13:00    2
data/name       2025-03-02 08:14:00    57
data/name       2025-03-02 08:15:00    347
data/name       2025-03-02 08:16:00    62
data/name       2025-03-02 08:17:00    48
data/address    2025-03-02 08:18:00    21
data/city       2025-03-02 08:19:00    66
data/city       2025-03-02 08:20:00    55
data/address    2025-03-02 08:21:00    7

name event:
{"name":"log","awsRequestId":"aws","hostname":"1","pid":8,"level":30,"requestType":"GET","entity":"name","client":"Ha2@gmail.com","domain":"name.io","queryParams":{"identifier":"977265"},"responseTime":320,"msg":"responseTime","time":"2025-03-02T03:23:40.504Z","v":0}

address event:
{"name":"log","awsRequestId":"aws","hostname":"1","pid":8,"level":30,"requestType":"GET","entity":"address","client":"Harggg2@gmail.com","domain":"name.io","queryParams":{"identifier":"977265"},"responseTime":320,"msg":"responseTime","time":"2025-03-02T03:23:40.504Z","v":0}