All Posts

My client wants to know if users do not connect in 90 days they can be blocked
Hi Everyone - The error below indicates that the field containing the IP address does not exist in the events. The custom command looks for a field supplied via the "field" attribute of the "ipdetection" command. Please make sure you have the correct "field" value specified. For example:

... | ipdetection field=ip // sample usage when the ip field contains an IP address value

... | rex field=_raw "(?<ip_address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" | ipdetection field=ip_address // sample usage when you need to extract the IP address from the raw event

Additional command options can be found in the documentation: https://ta-ipqualityscore.readthedocs.io/en/latest/ipdetection.html

Please feel free to reach out if you experience any issues: support@ipqualityscore.com
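For readers who want to check the extraction step outside Splunk, the same IPv4 pattern can be sketched in Python. The `extract_ip` helper and the sample log line are illustrative, not part of the add-on:

```python
import re

# IPv4 pattern equivalent to the rex in the post, with escaped dots
IP_PATTERN = re.compile(r"(?P<ip_address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})")

def extract_ip(raw_event):
    """Return the first IPv4-looking substring in a raw event, or None."""
    match = IP_PATTERN.search(raw_event)
    return match.group("ip_address") if match else None

print(extract_ip("failed login from 192.168.1.10 port 22"))  # 192.168.1.10
```

If this returns None for your events, the ipdetection command will see no field to work with, which matches the error described above.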
@ankitarath2011 , the issue discussed in this post occurs in Splunk 9.0.0 and 9.0.1. It does not occur in 8.x versions nor in versions from 9.0.2 onward. How is your environment currently configured? See https://docs.splunk.com/Documentation/Splunk/8.2.3/DistSearch/PropagateSHCconfigurationchanges.  What deployer push mode are you using, and where are you setting this value in app.conf? Within an app on the search heads, or in $SPLUNK_HOME/etc/system/local/app.conf on the Deployer?
What should I search for in the splunkd log? And how can I check the _internal index?
Actually, the correction was made to 9.0.2. See my comment above... which also provides a workaround for those running 9.0.0 or 9.0.1.
I apologize for the confusion. I found out that the Splunk field extraction feature failed to extract the fields because of two different delimiters, pipe and space. It looks like I need to change the log format to all pipes or all spaces so that Splunk can extract the fields correctly.
Did you find anything in splunkd.log or the _internal index?
What do you mean by "code quality"? And how would you like to measure that?
@pc1 Can you please share the solution if you managed to resolve this issue? Thanks!
"Now, I aim to replace the location using an automatic lookup based on the ID "EF_97324_pewpew_sla." Unfortunately, I encounter an issue where I either retrieve only the location from the table, omitting the rest, or I only receive the values extracted from the field extraction."

I think you meant to say that your extraction populates the location field for every id, even those that do not contain location information. Instead of creating a table with all possible ids, you want to use a sparsely populated lookup to selectively override the "bad" location value in those events with "bad" ids. Is this correct?

Let me restate the requirement as this: if a lookup value exists, you want it to take precedence over any value your field extraction populates; if a lookup value does not exist, use the extracted value.

SPL can use coalesce to signal precedence. You need to name the extraction and lookup fields differently. Say you name your extracted field location_may_be_bad and the lookup output field just location; you can then use this to get the location:

| eval location = coalesce(location, location_may_be_bad)

Hope this helps.
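The precedence rule behind SPL's coalesce can be sketched in plain Python. This is a minimal illustration of the logic, not Splunk code; the field names reuse those from the post:

```python
def coalesce(*values):
    """Return the first non-None value, mirroring SPL's coalesce()."""
    for value in values:
        if value is not None:
            return value
    return None

# Lookup output wins when it exists; the extracted value is the fallback.
location_from_lookup = None          # no row matched in the lookup
location_may_be_bad = "ExtractedLoc" # value from the field extraction
location = coalesce(location_from_lookup, location_may_be_bad)
print(location)  # ExtractedLoc
```

The key design point is argument order: the field you want to take precedence must come first.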
Hello, I am new to Splunk and I am trying to extract fields using the built-in feature. Since the log format contains both pipes and spaces, the built-in field extraction was unable to work. I was trying to extract the field before the pipe as "name", after the pipe as "size", and after the first space as "value", as shown below. I don't care about the last values like 1547, 1458, 1887. Any help would be appreciated.

Name                               size  value
abc-pendingcardtransfer-networki   30    77784791
log-incomingtransaction-datainpu   3     78786821
dog-acceptedtransactions-incoming  1     7465466

Sample Logs:

9/2/22 11:52:39.005 AM abc-pendingcardtransfer-networki|30 77784791 1547
9/2/22 11:50:39.005 AM log-incomingtransaction-datainpu|3 78786821 1458
9/2/22 11:45:39.005 AM [INFO] 2022-09-01 13:52:38.22 [main] ApacheInactivityMonitor - Number of input traffic is 25
9/2/22 11:44:39.005 AM dog-acceptedtransactions-incoming|1 7465466 1887

Thank You
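A mixed pipe-and-space format like this is still extractable with a single regex, which is what Splunk's rex command would use under the hood. Here is a hedged Python sketch of one such pattern; the `extract_fields` helper is hypothetical, the field names come from the question, and lines without a name|size pair (like the [INFO] line) are simply skipped:

```python
import re

# name: letters/digits/hyphens before the pipe
# size: digits after the pipe
# value: the next whitespace-separated run of digits
# The trailing number (1547, 1458, ...) is deliberately not captured.
LINE_PATTERN = re.compile(
    r"(?P<name>[A-Za-z][\w-]*)\|(?P<size>\d+)\s+(?P<value>\d+)"
)

def extract_fields(line):
    """Return a dict of name/size/value, or None if the line doesn't match."""
    m = LINE_PATTERN.search(line)
    return m.groupdict() if m else None

print(extract_fields("abc-pendingcardtransfer-networki|30 77784791 1547"))
```

Because the pattern is unanchored, it also tolerates a leading timestamp prefix; the equivalent rex in SPL would be something like `| rex "(?<name>[A-Za-z][\w-]*)\|(?<size>\d+)\s+(?<value>\d+)"`, assuming your raw events look like the samples above.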
I was given incorrect information in my last post. Our Splunk is on-prem and we want to migrate to the Cloud. Will we be given the option to use on-prem and cloud as a hybrid while migrating? Also, what are the options for forwarding redundancy during migration? Thank you
Hi @Jubin.Patel, I searched the community and didn't find any existing posts. I did find an existing Support ticket that went into great detail, and based on what I read, I think it's best if you create a Support ticket, as the issue was quite involved. See: How do I submit a Support ticket? An FAQ. If you are able to offer a post-solution summary as a reply here, that would be appreciated.
Do we have anything (i.e. Add-on or functionality) to check the code quality of our Splunk dashboards, reports and alerts ?
Each of those searches already has at least one specific filter so you know how to do that.  Please explain exactly what you want from us.
Deferred Searches:

| rest /servicesNS/-/-/search/jobs splunk_server=local
| search dispatchState="DEFERRED" isSavedSearch=1
| search title IN ("*outputcsv*","*outputlookup*","*collect*")
| table label dispatchState reason published updated title

Skipped Searches:

index=_internal sourcetype=scheduler status=skipped
    [| rest /servicesNS/-/-/saved/searches splunk_server=local
    | search search IN ("*outputcsv *","*outputlookup *")
    | table title
    | rename title as savedsearch_name]
| stats count by app search_type reason savedsearch_name
| sort - count

Searches run with errors:

| rest /servicesNS/-/-/search/jobs splunk_server=local
| search isSavedSearch=1 isFailed=1
| search title IN ("*outputcsv*","*outputlookup*","*collect*")
| table label dispatchState reason published updated messages.fatal title

Saved searches with the collect command that generated 0 events:

index=_internal sourcetype=scheduler result_count=0
    [| rest /servicesNS/-/-/saved/searches splunk_server=local
    | search search="*collect*"
    | table title
    | rename title as savedsearch_name]
| table _time user app savedsearch_name status scheduled_time run_time result_count
| convert ctime(scheduled_time)
`macro1("my_parent()")` works because the parentheses are inside the double quotes. `macro1(my_paren())` does not work because there are no double quotes around it.
So I'm running Splunk Enterprise on a supported system, Windows 22H2, 64-bit. But how can I solve the issue regarding the app and task list?
Hi @nathanhfraenkel, I have never seen this issue, and I have deployed apps on many thousands of Windows clients. If you have this issue, open a case with Splunk Support. Ciao. Giuseppe
Hi @corti77, you can collect data from Windows Defender using the Splunk Add-on for Windows Security (https://splunkbase.splunk.com/app/6207), which is also accepted by Microsoft (https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/the-splunk-add-on-for-microsoft-security-is-now-available/ba-p/3171272). Ciao. Giuseppe