All Posts

Hi Simon, OS: Linux, Splunk version: 9.1.8, AME version: 3.2.3
I have tested the query and can confirm that some metrics are being processed, as I mentioned in my initial question. For example:

04-02-2025 08:17:01.787 +0000 INFO Metrics - group=per_source_thruput, ingest_pipe=1, series="/opt/splunk/var/log/splunk/splunk_ta_microsoft-cloudservices_azure_resource.log", kbps="1.382", eps="3.766", kb="41.450", ev=113, avg_age="2.867", max_age=4

However, I still don't see any logs related to Load Balancers.

Regarding your question: yes, the index for this metric input is a metric index, not an event index.

That said, I have downloaded the source code of the add-on and I suspect the issue might be on the Azure side. Here's why, based on debugging the execution flow:

_index_resource_metrics: this function prepares the request. Here, I do see references to the Load Balancer.
_index_metrics: this function calls _fetch_resource_metrics, which sends the request to Azure and processes the results in a loop. If the response is empty, there is no log message indicating that Azure returned no data.

No relevant errors or exceptions were found in the logs.
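One extra check worth running here: since the add-on is silent when Azure returns an empty response, you can still look for warnings or errors in its own log file (the one referenced in the metrics line above). This is only a rough sketch, assuming that file is indexed into _internal as usual; the search terms are guesses, not known log strings:

index=_internal source="*splunk_ta_microsoft-cloudservices_azure_resource*" (ERROR OR WARNING OR "load balancer" OR loadBalancer)
| sort - _time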
My guess would be no - it is likely that a return value of zero is used in multiple unrelated events. Perhaps you have over-sanitised the events, which has hidden some clues that might help us suggest other ways to group them. Please share at least two sets of events, related to at least two different business events, e.g. downloading different external documents, with as little sanitisation as possible. Obviously, try not to give away any sensitive or proprietary information.

Given that you have three accessDates for the sets of events, and assuming DocumentId is unique for your business event / download, you could also try something like this:

| eval parameters=json_array_to_mv(json_extract_exact(_raw,"parameters"))
| mvexpand parameters
| spath input=parameters
| spath accessDate
| eval name="field_".trim(name,"@")
| eval {name}=value
| stats values(field_*) as * by accessDate
| stats values(*) as * by DocumentId
Thanks so much!!!
Hi @Treize

If your "ip" field in the lookup is a CIDR, then configure it as a lookup definition (rather than referencing it as <lookup1>.csv) and then, under Advanced Options, set the Match type to CIDR(ip), as below.

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
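For reference, the conf-file equivalent of those UI steps would look roughly like this in transforms.conf. The definition name and filename below are placeholders; match_type = CIDR(ip) is the setting the Advanced Options screen writes for you:

# transforms.conf (sketch; stanza name and filename are placeholders)
[my_cidr_lookup]
filename = lookup1.csv
match_type = CIDR(ip)

You would then reference the definition name (my_cidr_lookup) in your search instead of the .csv file.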
You could break up the csv so that you process it in chunks of 10000 events:

| makeresults count=50010
| streamstats count as row
| search NOT [| inputlookup large.csv | head 10000]
| search NOT [| inputlookup large.csv | head 20000 | tail 10000]
| search NOT [| inputlookup large.csv | head 30000 | tail 10000]
| search NOT [| inputlookup large.csv | head 40000 | tail 10000]
| search NOT [| inputlookup large.csv | head 50000 | tail 10000]
Since we had the same problem in our company, this was the official Splunk support response:

The team informed us that this known issue is fixed in the upcoming Splunk 9.4.3 release. Since we do not currently have an ETA for the release, they have suggested a workaround in the meantime.

In the web-features.conf file in the etc/system/local directory, under the stanza [feature:page_migration], add 2 flags:

enable_admin_LDAP-groups_vnext = false
enable_authentication_providers_LDAP_vnext = false

This should fix the issue.
Thanks, I'll keep the optimization in mind and implement it as soon as my problem is solved.
Hello and thank you for your suggestion.

It works pretty well indeed! The only problem, and it's my fault, is that I forgot to specify that 1.csv contains CIDRs and not simple IPs... Any ideas?

I'd never have thought of the limits of subsearch. How has it worked all these years, when we exceeded 10,000 lines years ago?
On the LB they keep saying it's not possible to check for the HTTP status. Is there any way to render the response static?

Luca
Hi @bhupalbobbadi ,

If I correctly understood your problem, your second SH doesn't send emails, is that correct?

Did you check that the connection between this SH and the mail server is open on the port used (for example, using telnet from the SH to the mail server)?

Ciao.

Giuseppe
Your subsearch only returns a limited number of rows (possibly only 10,000, not the 55,000 you were expecting). As an experiment, I tried this:

| makeresults count=50001
| streamstats count as row
| outputlookup large.csv

Followed by:

| makeresults count=50010
| streamstats count as row
| search NOT [| inputlookup large.csv]

I still get 40,010 rows, not the 9 that I "should" get.
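For reference, the cap that bites here normally comes from limits.conf. The sketch below shows what I understand to be the shipped defaults; it is not a recommendation to raise them, since large subsearches are usually better replaced with lookup or chunking as discussed in this thread:

# limits.conf (shipped defaults, as an assumption)
[subsearch]
maxout = 10000
maxtime = 60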
Hi @Treize ,

I don't know why your lookup doesn't run correctly; how many lines are there in your lookups? Maybe you exceeded the limit of 10,000 rows.

Also, did you change the name of the "ip" field in the main search or in the lookups?

Anyway, your search isn't optimized: it would be better to put the two subsearches into the main search and declare the field to use with the fields command:

<my search> NOT ([ | inputlookup 1.csv | fields ip ] OR [ | inputlookup 2.csv | fields ip ])
| dedup ip
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

Ciao.

Giuseppe
Hi @Treize ,

The issue is likely due to subsearch limitations. Subsearches have default limits on the number of results (10,000) and runtime, and your second lookup (55,000 lines) exceeds the default output limit. This means the subsearch [ | inputlookup 2.csv ] might only be returning the first 10,000 IPs, causing the outer search to incorrectly identify IPs beyond that limit as "not found".

Use the lookup command instead, which is more efficient and avoids these limits:

| dedup ip
| lookup 1.csv IP AS ip OUTPUT IP AS found_in_1
| where isnull(found_in_1)
| lookup 2.csv IP AS ip OUTPUT IP AS found_in_2
| where isnull(found_in_2)
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

This search first finds unique IPs, then uses lookup to check against 1.csv. It keeps only the IPs not found (where isnull(found_in_1)). It then repeats the process for 2.csv. Finally, it outputs the remaining IPs (those not found in either lookup) to 2.csv.

Quick Tip: Double-check that the IP column in your lookup files doesn't contain leading/trailing whitespace.

https://docs.splunk.com/Documentation/Splunk/latest/Search/Aboutsubsearches#Subsearch_performance_considerations

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
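If you want to act on the whitespace tip above, a minimal one-off cleanup along these lines would normalise the IP column in place. This is only a sketch: it assumes the column header is IP as shown elsewhere in this thread, and it overwrites 2.csv, so take a copy of the file first:

| inputlookup 2.csv
| eval IP=trim(IP)
| outputlookup 2.csv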
Hi there,

I'm simplifying the context: we've had a perfectly working correlation rule for several years now, and for the past 2 days it hasn't been working properly.

The search has to list IPs and then check that these IPs are not in a first lookup and then not in a second lookup. If the IPs are not in either lookup, an alert is triggered. The IPs are then added to the second lookup, so that they can be ignored in future searches. It looks like this:

<my search>
| dedup ip
| search NOT [ | inputlookup 1.csv ]
| search NOT [ | inputlookup 2.csv ]
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

The lookups both have the same format:

IP
-------
1.1.1.1
2.2.2.2
etc

The first lookup has 1,000 lines; the second lookup has 55,000 lines.

Everything was working fine, but now we have IPs that are triggering alerts despite being in the second lookup. Any ideas?

Thanks a lot.
Hi @hank72 ,

It's hard to picture exactly what you need, but I think you can achieve this by appending the results from the REST calls and the lookup, then using stats to find users common to all three sources.

| rest /services/authentication/users splunk_server=local
| search type=SAML
| fields title
| rename title AS User
| eval source="saml"
| append [| inputlookup 12k_line.csv | fields User | eval source="lookup"]
| append [| rest /servicesNS/-/-/directory splunk_server=local | fields author | dedup author | rename author AS User | eval source="directory"]
| stats dc(source) as source_count by User
| where source_count = 3
| fields User

The first section retrieves SAML users and assigns source="saml".
The first append adds users from your lookup file, assigning source="lookup".
The second append adds directory authors, assigning source="directory".
stats dc(source) by User counts how many distinct sources (saml, lookup, directory) each user appears in.

Note: Using multiple REST calls and append can be less performant depending on the volume of data returned by each source.

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @christal654 ,

To resolve the installation rollback issue for the Splunk Universal Forwarder (UF), try the following steps:

Remove remaining files and registry entries:
Ensure the Splunk folder is deleted from C:\Program Files\ (which I believe you have done already).
Check for any remaining Splunk registry entries: open regedit, navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Splunk and delete the Splunk key if it exists. Also check HKEY_CURRENT_USER\Software\Splunk and delete it if necessary.

Check for pending reboots.

Install using the command line (sometimes the GUI can cause issues):
msiexec /i SplunkUniversalForwarder-<version>-x64-release.msi /quiet INSTALLDIR="C:\Program Files\SplunkUniversalForwarder"
Replace <version> with the actual version number.

Check installation logs: look at the installation log files located in the %temp% directory (e.g., msi*.log) to identify specific errors during the installation process.

Permissions: ensure you have administrative privileges when running the installer.

If these steps do not resolve the issue, provide the error messages from the installation logs for further troubleshooting. For detailed troubleshooting, refer to the Splunk documentation on installation issues: Splunk Universal Forwarder Installation.

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
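If the rollback happens again and nothing useful shows up under %temp%, you can also ask msiexec to write a verbose log to a path of your choosing using the standard Windows Installer logging switch. The log path below is just an example:

msiexec /i SplunkUniversalForwarder-<version>-x64-release.msi /quiet INSTALLDIR="C:\Program Files\SplunkUniversalForwarder" /L*v C:\Temp\uf_install.log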
Hi @AviSharma8 ,

Updated: According to the 8.2 docs it's possible to upgrade a UF from 7.3 -> 8.2 ("Upgrading a universal forwarder directly to version 8.2 is supported from versions 7.3.x, 8.0.x, and...") and 9.4 supports an upgrade from 8.2 ("Splunk supports a direct upgrade of a universal forwarder to version 9.4 from versions 8.2.x and hig..."). This is contrary to the broader upgrade path for a full Splunk installation (not a UF), which has more steps due to more migrations taking place.

It's also worth mentioning that Splunk Enterprise version 9.0 and higher requires Linux kernel version 3.x or higher and has an updated OS support list; check the supported operating systems at https://docs.splunk.com/Documentation/Splunk/9.4.1/Installation/Systemrequirements

I have seen other people upgrade 7.3 -> 9.x without issue in a single step; however, your mileage may vary due to unknown migrations under the hood, so you may wish to follow the official stance as above.

Here's the general process for upgrading using the .tgz package on Linux for each of your 7.3 -> 8.2 -> 9.4 steps:

Stop the existing Universal Forwarder:
$SPLUNK_HOME/bin/splunk stop

(Optional, but recommended) Back up the configuration:
cp -rp $SPLUNK_HOME/etc /tmp/splunk_etc_backup_$(date +%F)

Download and extract the new version:
Download the latest UF .tgz package for your Linux architecture from splunk.com, then extract it:
tar xzf splunkforwarder*.tgz -C /opt
Note: Ensure you extract it to the same parent directory as your existing $SPLUNK_HOME. For example, if your current UF is in /opt/splunkforwarder, extract the new version also into /opt.

Start the forwarder (this requires accepting the license on the first start after an upgrade):
$SPLUNK_HOME/bin/splunk start --accept-license --answer-yes

A few additional tips:
Test the upgrade on a non-critical forwarder or a small group first.
For upgrading many forwarders, consider using deployment tools like Ansible, Puppet, Chef, or SaltStack, or the Splunk Deployment Server.
Always check the forwarder compatibility matrix to ensure the new UF version is compatible with your Splunk Enterprise indexers/deployment server.

Useful documentation links:
https://docs.splunk.com/Documentation/Forwarder/9.4.1/Forwarder/Upgradetheuniversalforwarder
https://lantern.splunk.com/Splunk_Platform/Product_Tips/Upgrades_and_Migration/Upgrading_the_Splunk_platform

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
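One small addition to the steps above: after each hop you can confirm the forwarder is actually running the expected build before moving on to the next one, using the standard CLI command:

$SPLUNK_HOME/bin/splunk version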
Hi there! We finally found the solution!

On the SLM, go to Settings > General Setup > Edit the node > Disable monitoring, then re-enable it.

Thanks to all.
Hi @Zoe_ ,

Yes, switching from UDP to TCP might resolve incomplete log events sent via syslog. UDP is a connectionless protocol and does not guarantee delivery or prevent truncation, especially for logs larger than the network's MTU size (typically around 1500 bytes). TCP establishes a connection, ensures ordered and reliable delivery, and handles segmentation and reassembly of large messages, preventing truncation.

# Update your inputs.conf to receive TCP instead of UDP.
[tcp://514]
connection_host = ip # Or dns, depending on your needs
sourcetype = syslog # Set appropriate sourcetype
# Add other settings like index as needed
disabled = 0

# Ensure your old UDP input is disabled
[udp://514]
disabled = 1

Remember to restart Splunk or reload the configuration after changes. Ensure any firewalls between the sending devices and the Splunk receiver allow traffic on the chosen TCP port (e.g., 514).

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
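As a quick sanity check after switching, a search along these lines should show events arriving via the new TCP input and let you compare event lengths before and after the change. The index and sourcetype here are assumptions; use whatever you configured in the input above:

index=* sourcetype=syslog
| eval event_length=len(_raw)
| stats count max(event_length) by host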