All Posts



Thank you @isoutamo for the help here. I have not yet implemented it, because I want to understand how resilient the whole solution is. As far as I understand, we have two mechanisms:
- forwarder ACK, configured in outputs.conf (useACK=true)
- HEC ACK, configured in inputs.conf (useACK=true)
Both mechanisms are independent. But when I enable only HEC ACK, it effectively enables forwarder ACK as well (because the HEC server can only return an ACK to the HEC client based on the information returned from the indexer). The HEC docs say: "HEC responds with the status information to the client (4). The body of the reply contains the status of each of the requests that the client queried. A true status only indicates that the event that corresponds to that ackID was replicated at the desired replication factor." So effectively I need to enable useACK=true in inputs.conf, correct? Also, what happens when the HEC server (my HF) suffers a hardware crash before it receives the ACK from the indexer (or even before it flushes its output queue)? Will it be able to recover after that crash? To do that it would need some kind of journaled, file-based persistence. Without that, if the event is lost, my HEC client will keep querying the HEC server indefinitely... Thanks, Michal
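(For anyone landing on this thread later, here is a minimal sketch of the two settings being discussed. The stanza names, token, and indexer address are placeholders, not taken from any specific deployment:)

```
# outputs.conf on the HF -- indexer acknowledgement for the forwarding layer
[tcpout:primary_indexers]
server = indexer1.example.com:9997
useACK = true

# inputs.conf on the HF -- indexer acknowledgement for the HEC input
# (lets clients poll ack status for events sent on this token)
[http://my_hec_input]
token = <your-token-here>
useACK = true
```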
How do I get Splunk billing usage data, hourly and monthly, through the APIs?
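(There is no dedicated "billing" endpoint that I am aware of, but one common approach - assuming "billing usage" means license usage and that you can search the license manager's _internal index - is to run an SPL query like the sketch below through the REST search API, e.g. a POST to /services/search/jobs:)

```
index=_internal source=*license_usage.log* type=Usage
| timechart span=1h sum(b) as bytes
| eval GB = round(bytes/1024/1024/1024, 3)
| fields _time GB
```

Change span=1h to span=1mon for monthly figures. The index name and log source here are the standard ones, but verify them against your own deployment.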
I am trying to plot a line graph where the x-axis is an index and the y-axis is a random value. I am also trying to add an annotation where annotationX is an index. Below is the code for the visualization.

"visualizations": {
  "viz_kHEXe45c": {
    "type": "splunk.area",
    "dataSources": {
      "primary": "ds_Search_1",
      "annotation": "ds_annotation_markers"
    },
    "options": {
      "x": "> primary | seriesByIndex(0)",
      "annotationX": "> annotation | seriesByIndex(0)",
      "annotationLabel": "> annotation | seriesByIndex(1)",
      "annotationColor": "> annotation | seriesByIndex(2)",
      "nullValueDisplay": "zero"
    },
    "title": "Test Event Annotation",
    "showProgressBar": false,
    "showLastUpdated": false
  }
},
"dataSources": {
  "ds_Search_1": {
    "type": "ds.search",
    "options": {
      "query": "| makeresults count=15\n| streamstats count\n| eval index=count\n| eval value=random()%100\n| fields index value"
    },
    "name": "ds_Search_1"
  },
  "ds_annotation_markers": {
    "type": "ds.search",
    "options": {
      "query": "| makeresults count=3\n| streamstats count\n| eval index=count\n| eval score = random()%3 +1\n| eval status = case(score=1,\"server error detected\", score=2, \"unknown user access\", score=3, \"status cleared\")\n| eval color = case(score=1,\"#f44271\", score=2, \"#f4a941\", score=3, \"#41f49a\")\n| table index status color"
    },
    "name": "ds_annotation_markers"
  }
},

Below is the line graph output based on the code above. Could anyone please help with how to add the annotation to the line graph when the x-axis is a non-time-based number type?
I don't quite follow you, but it seems like you only need a single dropdown:

index=$index$ (source="/log/test.log" $host$)
| rex field=name "(?<DB>[^\.]*)"
| stats count by DB

What is selecting your index and host fields? Can you share more of your dashboard XML? Note that the above doesn't do the rename or table, as they are not necessary - just use DB as the field for label/value rather than Database.
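To make that concrete, here is a rough SimpleXML sketch of a dropdown populated from the extracted DB field. The token name and the $index$/$host$ tokens are assumptions carried over from the search above, so adjust them to your actual dashboard:

```xml
<input type="dropdown" token="db">
  <label>Database</label>
  <fieldForLabel>DB</fieldForLabel>
  <fieldForValue>DB</fieldForValue>
  <search>
    <query>index=$index$ (source="/log/test.log" $host$)
| rex field=name "(?&lt;DB&gt;[^\.]*)"
| stats count by DB</query>
  </search>
</input>
```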
I am not sure what you are missing here - if you want to also restrict the AD data, then add that to the search constraint:

(index="wineventlog" AND sourcetype="wineventlog" AND EventCode=4740) OR (index="activedirectory" AND sourcetype="ActiveDirectory" AND sAMAccountName=* AND OU="Test Users")
| eval Account_Name = lower(coalesce(Account_Name, sAMAccountName))
| search Account_Name="test-user"

There is nothing wrong with your logic as such, so at this point you will have a data stream containing two types of event - what are you now looking to do with it? I expect you want to combine these data sets by Account_Name, so you would typically do

| stats values(*) as * by Account_Name

but before doing that type of wildcard stats, limit the fields to what you want with a fields statement before it, i.e.

| fields Account_Name a b c x y z
There are a number of ways to do this - the example below uses makeresults to create your example data.

Simple way 1 - use eventstats to collect all networks for each server, then check whether the results contain fw-network-X, where X is the network the server is on:

| makeresults format=csv data="server,network,firewall
server-1,network-1,yes
server-1,fw-network-1,yes
server-2,network-2,no
server-3,network-1,yes
server-3,fw-network-1,yes
server-4,network-2,no
server-5,network-3,yes
server-5,fw-network-3,yes"
| fields - firewall
``` Above creates your example table ```
| eventstats values(network) as nws by server
| eval firewall=if(nws="fw-".network OR match(network,"^fw-"), "yes", "no")
| fields - nws
| table server network firewall

Depending on the subtleties of your data, you may need to tweak the eval firewall statement.
Your search is written in a very strange way for Splunk SPL - it makes it hard to understand what your data looks like and what you are actually trying to get to. Based on your posted search, this is a more efficient replacement - try it and see if it comes up with the same output as your basic search:

index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) (repoter.dataloadingintiated) OR (task.dataloadedfromfiles NOT "error" NOT "end_point" NOT "failed_data") OR ("app.mefwebdata - jobintiated")
| eval host=if(match(_raw, "(?i)app\.mefwebdata - jobintiated"), case(match(host_ip, "12.234"), "HOP"+substr(host, 120,24), match(host_ip, "10.123"), "HOM"+substr(host, 120,24)) + " - " + host_ip, null())
| eval FilesofDMA=if(match(_raw, "task\.dataloadedfromfiles"), 1, 0)
| stats values(host) as "Host Data Details" values(Error) as Error values(local) as "Files created localley on AMP" sum(FilesofDMA) as "File sent to DMA"
| appendpipe
    [ stats dc("Host Data Details") as count
    | eval Error="Job didn't run today"
    | where count==0
    | table Error]
So if a field is not "CIM compliant", does that mean it cannot be used in tstats?
Hello, we're running a trial of the Splunk Observability Cloud service. We tried to deploy the guided integration example (the Hipster Shop app). The data graph can be seen in APM and Infrastructure, but we get this error in all RUM dashboards: request to http://rum-api-service.o11y-rum/api/rum/v3/node-metrics failed, reason: getaddrinfo ENOTFOUND rum-api-service.o11y-rum. I'm afraid I may have defined the RUM-related environment variables incorrectly during the deployment:

RUM_REALM=jp0
RUM_AUTH=<RUM token>
RUM_APP_NAME=Hipster_Shop (arbitrary)
RUM_ENVIRONMENT=Hipster_Shop_Jump_Start (arbitrary)

As we haven't bought the service yet, we can't submit a support ticket to Splunk support... Would anyone please help? Thanks and regards
In fact, in this document, "https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/Consolidatedatafrommultiplehosts", I did not find anything saying that the second step needs to be executed.
Hi, I am trying to use earliest and latest with a date-time. Could you please advise the right format to use? I am not sure the SPL format below is correct:

Event Code="1234" AND earliest="5/8/2024:10:07:20" latest="5/8/2024:10:17:20
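(In case a concrete reference helps: the default format the earliest/latest time modifiers accept is %m/%d/%Y:%H:%M:%S, so the values above are the right shape. The visible problems are the missing closing quote on latest and the space in "Event Code" - the Windows event field is normally EventCode. A corrected sketch, with the field name assumed:)

```
EventCode="1234" earliest="5/8/2024:10:07:20" latest="5/8/2024:10:17:20"
```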
@jaibalaraman search can be run in any time zone. Can you elaborate on your question - what exactly do you need?
Hello, how do I set a flag based on a field value across multiple rows? For example, in the following table, network-1 is set to yes because server-1, which is on network-1, is also on fw-network-1, which is behind a firewall. Please suggest. Thank you!!

server     network       firewall
server-1   network-1     yes
server-1   fw-network-1  yes
server-2   network-2     no
server-3   network-1     yes
server-3   fw-network-1  yes
server-4   network-2     no
server-5   network-3     yes
server-5   fw-network-3  yes
We would like to ask for help regarding DB Connect for DB2. We are currently trying to connect to the DB2 database of an IBM i server, but to no avail. Are there any steps that need to be done first for DB2 on IBM i to be able to connect successfully to Splunk?
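(One common stumbling block, offered as a guess since no error message is shown: DB2 on IBM i typically needs the IBM Toolbox for Java (JT400/JTOpen) JDBC driver rather than the DB2 LUW driver that the generic DB2 connection type uses. A rough sketch of what a custom connection type might look like in db_connection_types.conf - the stanza name, URL format, and port are illustrative placeholders to verify against the DB Connect docs:)

```
[db2_ibmi]
displayName = DB2 on IBM i (JT400)
jdbcDriverClass = com.ibm.as400.access.AS400JDBCDriver
jdbcUrlFormat = jdbc:as400://<host>/<database>;prompt=false
port = 446
```

The jt400.jar driver would need to be placed in DB Connect's drivers directory for this to load.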
Also, the search can be done in UTC or any time zone.
Splunk search: EventCode="4688" AND earliest="5/8/2024:10:07:20" latest="5/8/2024:10:17:20" - could you please confirm the time search is correct?
Hello @Josh.Varughese  Yes, the old version of the Machine Agent is only supported on the Docker runtime, but the latest MA supports containerd. Please use the latest MA. Best Regards, Rajesh Ganapavarapu
The tstats command only works with indexed fields.  If the field is not indexed and is not in a data model (same thing, really), then it can't be used with tstats.
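To illustrate the distinction, two hedged one-line sketches (the index, data model, and field names are assumptions, not taken from this thread). The first works because index and sourcetype are indexed fields; the second works on a search-time field only because it lives in an accelerated data model:

```
| tstats count where index=wineventlog by sourcetype

| tstats summariesonly=true count from datamodel=Authentication by Authentication.src
```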
I have a wineventlog index to alert on locked accounts (EventCode=4740), but want to limit this based on certain users. The data in the wineventlog index is pretty limited, so it looks like I would have to reference another index, like activedirectory, that contains similar data. I was thinking I could reference the "OU" field in the activedirectory index so that this is possible, but I'm struggling with what I need to combine to make this search work. I've looked at using coalesce, and can get results from both indexes/sourcetypes, but can't seem to limit my search using EventCode=4740 and OU=Test Users Group.

(index="wineventlog" AND sourcetype="wineventlog" AND EventCode=4740) OR (index="activedirectory" AND sourcetype="ActiveDirectory" AND sAMAccountName=*)
| eval Account_Name = lower(coalesce(Account_Name, sAMAccountName))
| search Account_Name="test-user"

Some of the key fields I'm trying to reference from the indexes are as follows:

index = wineventlog
sourcetype = wineventlog
EventCode = 4740
Security_ID = domain\test-user
Account_Name = test-user
Account_Name = dc

index = activedirectory
sourcetype = ActiveDirectory
Account_Name = test-user
sAMAccountName = test-user
OU = Test Users Group
I just stumbled upon this post while looking for something semi-unrelated. FWIW: There are some instances where it must be set to "true" in the .conf files. I had an issue back in Feb where queries were not displaying length of execution in Splunk 9.0.8.  Found a KB article in Splunk support that suggested it might be caused by a setting** in limits.conf that was set to "1" instead of "true". We changed it to "true" and that fixed it. We did a little digging with the rest API and found that it would return 1/0 for the configs, but when looking at the .confs, they were written as true/false. **I won't reference the setting so as to not upset the Splunk Gods who may hold support contracts sacred.