All Posts

Hello Amory, the NetFlow Analytics for Splunk App and TA-netflow are designed to work with NetFlow Optimizer. For details, please visit: https://docs.netflowlogic.com/integrations-and-apps/integrations-with-splunk/ If you have any questions or would like to see a demo, please contact us at team_splunk@netflowlogic.com
Hello all, new poster here. I have a CSV file with a column full of Splunk queries. I am trying to enrich my Splunk instance with the data from the CSV file via the following command:

index="index1" [ inputlookup rules.csv | eval search = if(boolean=="FALSE","\""+rule+"\"",rule) | return 10000 $search]
| fields _time index
| eval time_token = "_time=" + _time
| eval index_token = "index=" + index
| stats values(time_token) AS time_token values(index_token) AS index_token
| eval time_token=mvjoin(time_token," OR ")
| eval index_token=mvjoin(index_token," OR ")
| append [ inputlookup rules.csv | eval rule = if(boolean=="FALSE","\""+rule+"\"",rule) | return 10000 $rule]
| eventstats first(time_token) AS time_token first(index_token) AS index_token
| search rule=*
| map maxsearches=100 search="search [| makeresults | eval search= \"$time_token$ $index_token$ $rule$\" | return $search] | eval rule_found=\"$rule$\", rule_id=\"$id$\""

The problem I am having is with the "map" command: everything after the second "search" is greyed out and not being included in the search. I have been able to get the following portion of the code working:

index="index1" [ inputlookup rules.csv | eval search = if(boolean=="FALSE","\""+rule+"\"",rule) | return 10000 $search]
| fields _time index
| eval time_token = "_time=" + _time
| eval index_token = "index=" + index
| stats values(time_token) AS time_token values(index_token) AS index_token
| eval time_token=mvjoin(time_token," OR ")
| eval index_token=mvjoin(index_token," OR ")
| append [ inputlookup rules.csv | eval rule = if(boolean=="FALSE","\""+rule+"\"",rule) | return 10000 $rule]
| eventstats first(time_token) AS time_token first(index_token) AS index_token
| search rule=*

Thank you for any suggestions you have to get this search working.
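For reference, a minimal sketch of the map token-substitution pattern the search above attempts; the index and field values here are illustrative assumptions, not the poster's data:

| makeresults
| eval rule="sourcetype=splunkd", id="rule1"
| map maxsearches=10 search="search index=_internal $rule$ | head 1 | eval rule_found=\"$rule$\", rule_id=\"$id$\""

Each $field$ token is replaced with the value from the incoming result, and the inner quotes must be escaped with backslashes because the entire inner search is itself a quoted string.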
Thanks @livehybrid for the reply. However, I have tried the same and it is not working as expected. As I said earlier, this is not JSON data by default; we converted it using KV_MODE = json on the SH... I think the JSON is extracted at search time, but I have applied this at index time. That might be the reason json_delete is not working... Can you please help me with any other alternative?
Hello @user487596, you should file a support case for such issues.
Hi, the delimiter doesn't work here. The only possible option is: index=_internal sourcetype IN ($ms2$) https://docs.splunk.com/Documentation/Splunk/9.0.3/DashStudio/inputMulti
@livehybrid As I said in my question, when I perform Edit > Source > Save, the images load perfectly. That means it is not an issue with permissions or a required change in web.conf. I think the issue is with the cache or the drilldown.
So you would use:

== props.conf ==
[yourSourceType]
TRANSFORMS-removeJsonKeys = removeJsonKeys1

== transforms.conf ==
[removeJsonKeys1]
INGEST_EVAL = _raw=json_delete(_raw, "avg_ingress_latency_be", "avg_ingress_latency_fe", "request_state", "server_response_code")

as json_delete takes an object (_raw) and a list of keys to delete. Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @Karthikeya This should be really easy to achieve by adding some simple props/transforms to your Indexers or HFs:

== props.conf ==
[yourSourceType]
TRANSFORMS-removeJsonKeys = removeJsonKeys1

== transforms.conf ==
[removeJsonKeys1]
INGEST_EVAL = _raw=json_delete(_raw, "key1", "nestedkey.subkey2")

You can also see how this would work in the UI, although obviously this isn't persistent. Here is a working example:

| makeresults
| eval _raw = "[{\"integrationName\":\"Opsgenie Edge Connector - Splunk\",\"alert\":{\"message\":\"[ThousandEyes] Alert for TMS Core Healthcheck\",\"id\":\"abc123xyz\"},\"action\":\"Create\"},{\"integrationName\":\"Opsgenie Edge Connector - Splunk\",\"alert\":{\"message\":\"[ThousandEyes] Alert for TMS Core Healthcheck\",\"id\":\"abc123xyz\"},\"action\":\"Close\"},{\"integrationName\":\"Opsgenie Edge Connector - Splunk\",\"alert\":{\"message\":\"[ThousandEyes] Alert for TMS Core Healthcheck\",\"id\":\"def456uvw\"},\"action\":\"Create\"},{\"integrationName\":\"Opsgenie Edge Connector - Splunk\",\"alert\":{\"message\":\"[ThousandEyes] Alert for TMS Core Healthcheck\",\"id\":\"def456uvw\"},\"action\":\"Close\"},{\"integrationName\":\"Opsgenie Edge Connector - Splunk\",\"alert\":{\"message\":\"[ThousandEyes] Alert for TMS Core Healthcheck\",\"id\":\"ghi789rst\"},\"action\":\"Create\"}]"
| eval events=json_array_to_mv(_raw)
| mvexpand events
| eval _raw=events
| fields _raw
| eval _raw=json_delete(_raw, "integrationName", "alert.id")

Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @uagraw01 Have you configured the "Dashboards Trusted Domains List" to allow the domain/URL of the image you are trying to load? Check out https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/ConfigureDashboardsTrustedDomainsList for details on how to set this up. Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
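For reference, the trusted domains list is set in web.conf on the search head; a minimal sketch, where the entry name and media-server URL are placeholder assumptions:

== web.conf ==
[settings]
dashboards_trusted_domain.media_server = https://media.example.com

Splunk Web typically needs a restart for the change to take effect.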
The if function works like the ternary ? : operator in C. So the proper syntax for setting a field conditionally is like this: | eval field=if(something=="something","value_when_true","value_when_false")
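A runnable sketch of the same pattern, using a hypothetical status field:

| makeresults
| eval status="error"
| eval severity=if(status=="error","high","low")

This returns a single result with severity="high".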
Hi @Sultan77, sorry, what do you mean by correlation with it? Ciao. Giuseppe
You may have encountered a case where you have to update the operating system version where Splunk resides, in this case Red Hat 7.x to 9.x. Is there any consideration that should be taken into account? There are two instances that fulfill the indexer role, and another cluster instance that manages both; the latter will not be updated. I was thinking of cloning each server, updating the clone in an isolated network, and then swapping them one by one into the production environment. Do you know if that works, or should I apply another strategy?
I have Splunk 9.4 installed and this is the file in the root $SPLUNK_HOME folder: splunk-9.4.0-6b4ebe426ca6-windows-x64-manifest. That file name changes based on the version you have installed. Inside that file is a list of ~30K files with their expected owner and permissions. The health check verifies against all of the contents, but only on Splunk restart.
No response for this issue from Splunk. I am probably going to write a bug report this week and see if that gets any traction.
The COALESCE did the trick. You are awesome. Thanks for all of the help. I can finally get a good night's rest. Thanks, Tom
@livehybrid For your information, I have changed the KV store port from 8191 to 8192 and it has been working properly since then.
Hello Splunkers!! I have a Splunk dashboard where I am using a drilldown to dynamically load images from a media server. However, the images do not load initially. The strange part is that when I go to Edit > Source and then simply save the dashboard again (without making any changes), the images start loading correctly. Why is this happening, and how can I permanently fix this issue without needing to manually edit and save the dashboard every time? Any insights or solutions would be greatly appreciated! I always get the error below; after performing the Edit > Source action, the images load perfectly.
Accepting this post as a solution since my "question" contains the solution and was really for information sharing purposes.
Please share your dashboard configuration source.
Can you provide a sample?