All Posts

You can do this by turning your number into a sequence of one or two spaces and then using a colorPalette expression to set the colours. This example turns a count of either 0 or 1 into 1 space or 2 spaces, and the colorPalette expression then makes a double space red (#FF0000), otherwise green. Unless you actually need the number for drilldown purposes, this should do it. (Note that the literal strings in the second eval and in the colorPalette expression are two spaces.)

<dashboard>
  <label>colour2</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults count=6
| fields - _time
| eval count=random() % 2, orig_count=count
| eval count=substr("  ", 1, count + 1)
| table count orig_count</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="count">
          <colorPalette type="expression">if (value == "  ", "#FF0000", "#00FF00")</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</dashboard>
Is there a way to pass these values from a file?

-d name=firstApiTest \
-d disabled=1 \
-d owner=nobody \
-d description=descriptionText \
-d search="index=main" \
-d dispatch.index_earliest=-7d \
-d dispatch.index_latest=now
Hello folks, good morning to one and all. I have the Trend Micro Cloud One service, and I want to integrate it with a Splunk instance hosted in the cloud. Kindly suggest the mechanism for this; as far as I have checked, there is no add-on available for it. I know that Trend Micro Cloud One can forward logs via syslog, so with the Splunk instance in the cloud, what would be the Splunk interface for syslog in this integration? Please share your opinion on this.

Regards,
Gautam Khillare (GK)
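For illustration, one common pattern (an assumption here, not something stated in the thread) is to land the syslog feed on a heavy forwarder, or on Splunk Connect for Syslog, which then forwards to Splunk Cloud, since Splunk Cloud does not accept raw syslog directly. A minimal inputs.conf sketch for a heavy forwarder, where the port, sourcetype, and index are placeholder values:

# inputs.conf on a heavy forwarder that receives the syslog feed
# and forwards events on to Splunk Cloud; all values are examples
[tcp://514]
sourcetype = trendmicro:cloudone
index = security
disabled = 0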
Splunk seems to have a problem with authenticating a SAML user account using a token. The purpose of using token authentication is to allow an external application to run a search and get the results. A sample script is posted on GitHub as a code gist; the script simply starts a search but does not wait for the results. The problem is that when token authentication is used with a SAML account, it only works when that SAML user is logged in on the Splunk web GUI and while the interactive session is (still) valid. The problem is shown in the internal log:

07-03-2023 19:35:53.931 +0000 ERROR Saml [795668 AttrQueryRequestExecutorWorker-0] - No status code found in SamlResponse, Not a valid status.
07-03-2023 19:35:53.901 +0000 ERROR Saml [795669 AttrQueryRequestExecutorWorker-1] - No status code found in SamlResponse, Not a valid status.

The theory on the failure is:
- Token authentication works within Splunk;
- But Splunk needs to perform RBAC after authentication, so it does an AQR after the authentication;
- However, when there is no valid, live SAML session, the AQR fails.

(AQR = Attribute Query Request; in this case, to get the user's group memberships to map to Splunk roles.)

I wonder if anyone has been able to get token authentication to work for a SAML account?

[Edit]: On the other hand, is it simply impossible to use token authentication with a SAML user account?
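For reference, the kind of token-authenticated call being tested looks roughly like the sketch below. This is generic, not the gist from the post, and the hostname, token variable, and search string are placeholders:

# start a search job on the management port, authenticating with a
# Splunk authentication token (JWT) in the Authorization header
curl -k https://splunk.example.com:8089/services/search/jobs \
     -H "Authorization: Bearer $SPLUNK_TOKEN" \
     -d search="search index=_internal | head 10"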
Glad that you have managed to resolve the issue, like me. I searched high and low for a solution as well, and luckily managed to Google the link I sent you and resolved the issue. @dwthomas16 Happy Splunking.
Hi,

I want to know what happens after a Splunk universal forwarder reaches its throughput limit, because I found that my universal forwarder stops ingesting data at a certain moment every day, and I don't know what is happening. I just set up the thruput limit in limits.conf and restarted the UF, and the remaining data was collected, although I'm not sure whether it will still be effective next time... So when the throughput limit is reached, will the Splunk UF stop collecting data until the next restart?
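For context, the setting in question is maxKBps in the [thruput] stanza of limits.conf on the forwarder; a minimal sketch, with the value chosen purely as an example:

# limits.conf on the universal forwarder
# maxKBps caps the output rate in KB per second; 0 means unlimited
# (the universal forwarder ships with a low default of 256)
[thruput]
maxKBps = 1024

When the cap is reached, the usual behaviour is that the forwarder throttles its reads and catches up later rather than stopping permanently, so a hard daily stop is worth investigating beyond this setting alone.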
I need to run curl commands for various tasks such as creating searches, accessing searches, etc. I have the below command, which works perfectly:

curl -k -u admin:test12345 https://127.0.0.1:8089/services/saved/searches/ \
  -d name=test_durable \
  -d cron_schedule="*/15 * * * *" \
  -d description="This test job is a durable saved search" \
  -d dispatch.earliest_time="-15h@h" -d dispatch.latest_time=now \
  --data-urlencode search="search index=_audit sourcetype=audittrail | stats count by host"

But given that I may have to craft various curl commands with different -d flags, I want to be able to pass values through a file, so I used the below command:

curl -k -u admin:test12345 https://127.0.0.1:8089/services/saved/searches/ --data-binary data.json

where data.json looks like this:

{
  "name": "test_durable",
  "cron_schedule": "*/15 * * * *",
  "description": "This test job is a durable saved search",
  "dispatch.earliest_time": "-15h@h",
  "dispatch.latest_time": "now",
  "search": "search index=_audit sourcetype=audittrail | stats count by host"
}

but in doing so I get the following error:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Cannot perform action "POST" without a target name to act on.</msg>
  </messages>
</response>

After going through a lot of different posts on this topic, I realised Splunk seems to have a problem with the JSON format, or mainly with extracting the 'name' attribute from JSON. Can someone please assist with how I can craft a curl command that uses data from a file like above and gets a correct response from Splunk?
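For illustration, two details seem likely to matter here (assumptions on my part, not confirmed in the thread): --data-binary data.json sends the literal string "data.json" rather than the file contents (reading a file needs the @ prefix), and this endpoint expects form-encoded key=value pairs rather than JSON. A sketch that keeps the file-based workflow:

# data.txt: the same parameters, form-encoded; curl -d @file strips
# newlines, so each line ends with '&' except the last, and values
# must already be URL-encoded (space=%20, @=%40, |=%7C, ==%3D, /=%2F)
name=test_durable&
cron_schedule=*%2F15%20*%20*%20*%20*&
description=This%20test%20job%20is%20a%20durable%20saved%20search&
dispatch.earliest_time=-15h%40h&
dispatch.latest_time=now&
search=search%20index%3D_audit%20sourcetype%3Daudittrail%20%7C%20stats%20count%20by%20host

curl -k -u admin:test12345 https://127.0.0.1:8089/services/saved/searches/ -d @data.txt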
Hi @gcusello, I set up 2 VMs, one a Splunk Enterprise instance and the other the universal forwarder. I was able to figure it out; I had something wrong in my inputs.conf on the forwarder. Thank you for clarifying the trial license question for me, as I had that doubt. Cheers.
Hi, I am facing the same issue. Did you end up fixing it?
Hello, I am aware that Splunk is a very versatile application which allows users to manipulate data in many ways. I have extracted the fields event_name, task_id, and event_id. I am trying to create an alert if there is an increment in the event_id for the same task_id and event_name when the latest event arrives in Splunk.

For example, the event at 3:36:40.395 PM has task_id 3 and event_id 1223680, AND the latest event, which arrived at 3:52:40.395 PM, has task_id 3 and event_id 1223681. I am trying to create an alert because for the same task_id (3) and event_name (server_state) there is an increment in event_id.

I believe this is only possible if we store the previous event_id in a variable for the same event_name and task_id, so that we can compare it with the new event_id. However, we have four different task_ids, and I am not sure how to save the event_id for all those different task_ids. Any help would be appreciated.

Log file format:

8/01/2023 3:52:40.395 PM server_state|3 1123681 5
Date Timestamp event_name|task_id event_id random_number

Sample log file:

8/01/2023 3:52:40.395 PM server_state|3 1223681 5
8/01/2023 3:50:40.395 PM server_state|2 1201257 3
8/01/2023 3:45:40.395 PM server_state|1 1135465 2
8/01/2023 3:41:40.395 PM server_state|0 1545468 5
8/01/2023 3:36:40.395 PM server_state|3 1223680 0
8/01/2023 3:25:40.395 PM server_state|2 1201256 2
8/01/2023 3:15:40.395 PM server_state|1 1135464 3
8/01/2023 3:10:40.395 PM server_state|0 1545467 8

Thank you
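For illustration, the "remember the previous event_id" idea can be expressed with streamstats; a sketch, assuming placeholder index and sourcetype names and the field extractions described above:

index=your_index sourcetype=your_sourcetype
| sort 0 _time
| streamstats current=f last(event_id) as prev_event_id by task_id event_name
| where tonumber(event_id) > tonumber(prev_event_id)

Here sort 0 _time puts events in ascending time order, streamstats with current=f carries each group's previous event_id forward (separately per task_id and event_name), and the where clause keeps only events whose event_id increased relative to the previous event in the same group.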
Are all these numbers in a single field or part of a larger raw event? Assuming these are in a single field in the event, then simply

| eval numbers=split(your_big_long_numbers_field, ",")

which will make a new field called numbers containing a multivalue field with all your split numbers in it. If you then want to make a new row for each of those numbers, use

| mvexpand numbers
Have the dashboards in different tabs in the browser and use a browser tab cycler to cycle between the tabs?  
Is the dashboard search using tokens?
You will have to use the colorPalette expression syntax as in the example below. You can simply copy this XML row into an existing dashboard to see how it works; it's a dummy search that just creates a random time, and when it's in the out-of-hours range it goes red.

<row>
  <panel>
    <table>
      <title>Turning the Time column red if outside hours 18:00 to 06:00</title>
      <search>
        <query>| makeresults
| eval _time=now() - (random() % 86400)
| eval Date=strftime(_time, "%F"), Time=strftime(_time, "%T")
| eval EventCode=4624, Account_Name="user ".(random() % 10)
| table Date Time EventCode Account_Name</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">100</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="refresh.display">progressbar</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
      <format type="color" field="Time">
        <colorPalette type="expression">if(tonumber(substr(value,1,2))&gt;=18 OR tonumber(substr(value,1,2))&lt;6, "#FF0000", "#FFFFFF")</colorPalette>
      </format>
    </table>
  </panel>
</row>
In your outer search

index=firstindex Email_Address

remove the word "Email_Address". I assume you want to look for a field called Email_Address in the firstindex data using the values coming from the subsearch, but with this search you are looking for the WORD Email_Address as well as the value of the Email_Address FIELD coming from the subsearch.

You can see what a subsearch returns by running it on its own and using the | format specifier, e.g.

index=secondindex user="dreamer"
| fields Email_Address
| head 1
| format
OK, so if ALL your hosts are in the logs, you just need this:

index="index" source="C:\\Windows\\System32\\LogFiles\\Log.log" earliest=-45m latest=now
| eval Detection=if(match(_raw, "Detection!"), 1, 0)
| stats sum(Detection) as Detections by host

This finds all events from Log.log, and then line 2 sets the value of a new field to 1 if the word "Detection!" is found in the event. The stats then adds together all the Detection events for each host.

This is a key technique in Splunk for getting different sets of information from the same data: first select ALL the data you want to consider, then use the eval statement (Splunk's Swiss Army knife) to set some indicator (in this case, determining whether a particular event is the one you are really interested in counting), and then the stats just adds up all detections. Hosts that do NOT have the "Detection!" word will always have Detection=0, so they will end up with a Detections column value of 0.

Hope this helps.
So, try the suggestion. You only need the single search as I posted earlier, but with your updated search it should be like this:

index=dl* ("Error_MongoDB") OR ("Record_Inserted")
| eval Status=if(match(_raw, "Error_MongoDB"), "Failure", "Success")
| rename msg.attribute.ticketId as ticketId
| timechart span=1d dc(ticketId) by Status
| eval FailurePercentage = (Failure/Success)*100
| fillnull FailurePercentage

You don't need all the fields/table commands; the timechart will remove all the unnecessary fields anyway.
Sorry, still not sure I get it. You say partial matches of both A and B, so for your second example, what are the rules there?

field a = AAAAA\ABCDE-SS410009$
field b = A=AAAAA\ABCDE-SS410009,B=Domain,C=AB,D=XXX,E=NET

Now I want to match
field a = AAAAA\ABCDE-SS410009
field b = AAAAA\ABCDE-SS410009
like this

In the above, you show that all characters up to and excluding the final $ sign are found in B, so you appear to be showing the longest match of A found in B. So, if A had AAAAA\ABCDE-PP921234$ would you expect to see AAAAA\ABCDE as a match result, and if A had BBBBB\ABCDE-SS410009$ would you expect to see ABCDE-SS410009 as a match?

Also, is the A= part in B related to field 'a'?
Hi @splunk_learn,

you cannot use two Splunk instances on the same machine because they would have the same IP address and hostname: use two virtual machines.

A trial license is a full-feature license, so the issue isn't the license.

Ciao.
Giuseppe
Hi @rsannala,

yes, it's possible, as described at https://docs.splunk.com/Documentation/Splunk/latest/Data/Advancedsourcetypeoverrides

Remember that you have to perform this transformation on the first full Splunk instance: a Heavy Forwarder (if present) or an Indexer.

Ciao.
Giuseppe
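For reference, the override described on that docs page takes roughly this shape in props.conf and transforms.conf; the source path, regex, and sourcetype name below are placeholder assumptions, not values from the question:

# props.conf on the heavy forwarder or indexer
[source::/var/log/myapp/*.log]
TRANSFORMS-changesourcetype = set_new_sourcetype

# transforms.conf
[set_new_sourcetype]
REGEX = pattern_that_identifies_the_events
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::my_new_sourcetype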