All Posts

That solution isn't perfect, but it's a good tip, thanks dude
The lookup definition has already been created. The field is not actually named "ip", so in the definition should we put CIDR(ip) literally, because the value is an IP, or CIDR(<field name>) to specify which field it should take into account? In both cases, this solution doesn't work. It can't match the IPs against the lookup's CIDRs...
Hello everyone,

I need help determining the time an analyst needs to investigate an alert and close it. More precisely, I want to calculate the time spent from when the status_label field value is updated from In Progress to Closed or Resolved. Note that the default value of this field is New. I am new to Splunk, so please write the full query and I will adjust it to my needs.
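A minimal sketch of what such a search could look like, assuming each status change is logged as a separate event with _time and status_label, and that each alert has a unique identifier (the index, sourcetype, and alert_id field below are hypothetical placeholders):

index=main sourcetype=alert_audit status_label IN ("In Progress", "Closed", "Resolved") ``` replace index/sourcetype with wherever your status changes are logged ```
| eval phase=if(status_label="In Progress", "start", "end")
| stats min(eval(if(phase="start", _time, null()))) as start_time max(eval(if(phase="end", _time, null()))) as end_time by alert_id ``` alert_id is a hypothetical unique alert identifier ```
| eval investigation_seconds=end_time-start_time
| eval investigation_time=tostring(investigation_seconds, "duration")

The stats clause takes the first time the alert entered In Progress and the last time it was Closed or Resolved; tostring(..., "duration") renders the difference as HH:MM:SS.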
Hi Simon,
OS: Linux
Splunk version: 9.1.8
AME version: 3.2.3
I have tested the query and can confirm that some metrics are being processed, as I mentioned in my initial question. For example:

04-02-2025 08:17:01.787 +0000 INFO Metrics - group=per_source_thruput, ingest_pipe=1, series="/opt/splunk/var/log/splunk/splunk_ta_microsoft-cloudservices_azure_resource.log", kbps="1.382", eps="3.766", kb="41.450", ev=113, avg_age="2.867", max_age=4

However, I still don't see any logs related to Load Balancers. Regarding your question: yes, the index for this metric input is a metric index, not an event index.

That said, I have downloaded the source code of the add-on and I suspect the issue might be on the Azure side. Here's why. I debugged the execution flow:

_index_resource_metrics: this function prepares the request. Here, I do see references to the Load Balancer.
_index_metrics: this function calls _fetch_resource_metrics, which sends the request to Azure and processes the results in a loop. If the response is empty, there is no log message indicating that Azure returned no data.

No relevant errors or exceptions were found in the logs.
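One way to narrow this down is to search the add-on's own log file directly; a sketch, using the log path from the metrics line above (the keywords are assumptions, adjust them to whatever the add-on actually logs):

index=_internal source="/opt/splunk/var/log/splunk/splunk_ta_microsoft-cloudservices_azure_resource.log" ("ERROR" OR "WARN" OR "loadBalancer") ``` keyword list is illustrative ```

If nothing comes back even at ERROR/WARN level, that would support the theory that the request is sent and Azure silently returns an empty result set.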
My guess would be no - it is likely that a return value of zero is used in multiple unrelated events. Perhaps you have over-sanitised the events, which has hidden some clues that might help us suggest other ways to group them. Please share at least two sets of events, related to at least two different business events, e.g. downloading different external documents, with as little sanitisation as possible. Obviously, try not to give away any sensitive or proprietary information.

Given that you have three accessDates for the sets of events, and assuming DocumentId is unique for your business event / download, you could also try something like this:

| eval parameters=json_array_to_mv(json_extract_exact(_raw,"parameters"))
| mvexpand parameters
| spath input=parameters
| spath accessDate
| eval name="field_".trim(name,"@")
| eval {name}=value
| stats values(field_*) as * by accessDate
| stats values(*) as * by DocumentId
Thanks so much!!!
Hi @Treize

If your "ip" field in the lookup is a CIDR, then configure it as a lookup definition (rather than referencing it as <lookup1>.csv) and, under Advanced Options, set the Match type to CIDR(ip).

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
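For reference, the same match type can also be set directly in transforms.conf; a minimal sketch, with a hypothetical definition name and filename (the field named inside CIDR() must match the CSV column header):

# transforms.conf - definition name and filename are placeholders
[my_cidr_lookup]
filename = lookup1.csv
match_type = CIDR(ip)

The search would then reference the definition name (my_cidr_lookup) rather than the .csv file.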
You could break up the csv so that you process it in chunks of 10000 events:

| makeresults count=50010
| streamstats count as row
| search NOT [| inputlookup large.csv | head 10000]
| search NOT [| inputlookup large.csv | head 20000 | tail 10000]
| search NOT [| inputlookup large.csv | head 30000 | tail 10000]
| search NOT [| inputlookup large.csv | head 40000 | tail 10000]
| search NOT [| inputlookup large.csv | head 50000 | tail 10000]
Since we had the same problem in our company, this was the official Splunk support response: the team informed us that this known issue is fixed in the upcoming Splunk 9.4.3 release. Since there is currently no ETA for the release, they suggested a workaround in the meantime.

In the web-features.conf file in the etc/system/local directory, under the stanza [feature:page_migration], add 2 flags:

[feature:page_migration]
enable_admin_LDAP-groups_vnext = false
enable_authentication_providers_LDAP_vnext = false

This should fix the issue.
Thanks, I'll keep the optimization in mind and implement it as soon as my problem is solved.
Hello, and thank you for your suggestion.

It works pretty well indeed! The only problem, and it's my fault: I forgot to specify that 1.csv contains CIDRs and not simple IPs... Any ideas?

I'd never have thought of the limits of subsearch. How has it worked all these years, when we exceeded 10,000 lines years ago?
On the LB side they keep saying it's not possible to check for the HTTP status. Is there any way to render the response static?

Luca
Hi @bhupalbobbadi,

If I correctly understood your problem, your second SH doesn't send emails, is that correct?

Did you check that the connection between this SH and the mail server is open on the port in use (using telnet from the SH to the mail server)?

Ciao.
Giuseppe
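A quick connectivity check from the search head could look like this (hostname and port are placeholders; use your mail server's actual host and SMTP port, e.g. 25 or 587):

# run on the search head; a connected session means the port is open
telnet mail.example.com 25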
Your subsearch only returns a limited number of rows (possibly only 10000, not the 55000 you were expecting). As an experiment, I tried this:

| makeresults count=50001
| streamstats count as row
| outputlookup large.csv

Followed by:

| makeresults count=50010
| streamstats count as row
| search NOT [| inputlookup large.csv]

I still get 40010 rows, not the 9 that I "should" get.
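The cap being hit here is the subsearch result limit in limits.conf; a sketch of what raising it could look like (the value is illustrative, and raising it carries memory and performance costs, so the chunking or lookup-based approaches in this thread are usually preferable):

# limits.conf on the search head; the default maxout is 10000
[subsearch]
maxout = 60000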
Hi @Treize,

I don't know why your lookup doesn't run correctly. How many lines are there in your lookups? Maybe you exceeded the limit of 10,000 rows.

Also, did you change the name of the "ip" field in the main search or in the lookups?

Anyway, your search isn't optimized: it would be better to move the two subsearches into the main search and declare the field to use with the fields command:

<my search> NOT ([ | inputlookup 1.csv | fields ip ] OR [ | inputlookup 2.csv | fields ip ])
| dedup ip
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

Ciao.
Giuseppe
Hi @Treize,

The issue is likely due to subsearch limitations. Subsearches have default limits on the number of results (10,000) and on runtime, and your second lookup (55,000 lines) exceeds the default output limit. This means the subsearch [ | inputlookup 2.csv ] might only be returning the first 10,000 IPs, causing the outer search to incorrectly identify IPs beyond that limit as "not found".

Use the lookup command instead, which is more efficient and avoids these limits:

| dedup ip
| lookup 1.csv IP AS ip OUTPUT IP AS found_in_1
| where isnull(found_in_1)
| lookup 2.csv IP AS ip OUTPUT IP AS found_in_2
| where isnull(found_in_2)
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

This search first finds unique IPs, then uses lookup to check against 1.csv. It keeps only the IPs not found (where isnull(found_in_1)). It then repeats the process for 2.csv. Finally, it outputs the remaining IPs (those not found in either lookup) to 2.csv.

Quick tip: double-check that the IP column in your lookup files doesn't contain leading/trailing whitespace.

https://docs.splunk.com/Documentation/Splunk/latest/Search/Aboutsubsearches#Subsearch_performance_considerations

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
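If whitespace does turn out to be the culprit, a sketch of cleaning the lookup in place (assuming the column is named IP, as elsewhere in this thread):

| inputlookup 2.csv
| eval IP=trim(IP) ``` strip leading/trailing whitespace from the IP column ```
| outputlookup 2.csv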
Hi there,

I'm simplifying the context: we've had a perfectly working correlation rule for several years now, and for the past 2 days it hasn't been working properly.

The search has to list IPs and then check that these IPs are not in a first lookup and then not in a second lookup. If the IPs are in neither lookup, an alert is triggered. The IPs are then added to the second lookup, so that they are ignored in future searches. It looks like this:

<my search>
| dedup ip
| search NOT [ | inputlookup 1.csv ]
| search NOT [ | inputlookup 2.csv ]
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

The lookups are both identical in structure:

IP
-------
1.1.1.1
2.2.2.2
etc

The first lookup has 1000 lines; the second lookup has 55000 lines. Everything was working fine, but now we have IPs that are triggering alerts despite being in the second lookup. Any ideas? Thanks a lot.
Hi @hank72,

It's hard to picture exactly what you need, but I think you can achieve this by appending the results from the REST calls and the lookup, then using stats to find users common to all three sources.

| rest /services/authentication/users splunk_server=local
| search type=SAML
| fields title
| rename title AS User
| eval source="saml"
| append [| inputlookup 12k_line.csv | fields User | eval source="lookup"]
| append [| rest /servicesNS/-/-/directory splunk_server=local | fields author | dedup author | rename author AS User | eval source="directory"]
| stats dc(source) as source_count by User
| where source_count = 3
| fields User

The first section retrieves SAML users and assigns source="saml". The first append adds users from your lookup file, assigning source="lookup". The second append adds directory authors, assigning source="directory". stats dc(source) by User counts how many distinct sources (saml, lookup, directory) each user appears in.

Note: using multiple REST calls and append can be less performant depending on the volume of data returned by each source.

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @christal654,

To resolve the installation rollback issue for the Splunk Universal Forwarder (UF), try the following steps:

Remove remaining files and registry entries:
- Ensure the Splunk folder is deleted from C:\Program Files\ (which I believe you have done already).
- Check for any remaining Splunk registry entries: open regedit, navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Splunk, and delete the Splunk key if it exists. Also check HKEY_CURRENT_USER\Software\Splunk and delete it if necessary.

Check for pending reboots.

Install using the command line. Sometimes the GUI can cause issues; try installing via the command line (see the logging sketch below), replacing <version> with the actual version number:

msiexec /i SplunkUniversalForwarder-<version>-x64-release.msi /quiet INSTALLDIR="C:\Program Files\SplunkUniversalForwarder"

Check installation logs: look at the installation log files located in the %temp% directory (e.g., msi*.log) to identify specific errors during the installation process.

Permissions: ensure you have administrative privileges when running the installer.

If these steps do not resolve the issue, provide the error messages from the installation logs for further troubleshooting. For detailed troubleshooting, refer to the Splunk documentation on installation issues: Splunk Universal Forwarder Installation.

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
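If the %temp% logs are inconclusive, msiexec can write a verbose log to a file of your choosing via the standard /L*v switch; a sketch (the log path is arbitrary):

rem /L*v writes a verbose installer log to the given file
msiexec /i SplunkUniversalForwarder-<version>-x64-release.msi /quiet INSTALLDIR="C:\Program Files\SplunkUniversalForwarder" /L*v "%TEMP%\uf_install.log"

Searching that log for "Return value 3" is a common way to find the action that triggered the rollback.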