All Posts

Hi All,

I am trying to rename data but it gives me an error. I am doing it this way:

| rename "Data Time series* *errorcount=0" AS "Success"

but the error is:

Error in 'rename' command: Wildcard mismatch: 'Data Time series* *errorcount=0' as 'Success'.

Log file:

Data Time series :: DataTimeSeries{requestId='482-fd1e-47-49-bf9b99f8', errorcount=0,

Can you please help me with the correct rename command?
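A hedged note on the error: rename operates on field names, and every wildcard on the left-hand side must be matched by a wildcard on the right-hand side, so a wildcarded pattern cannot be renamed to the plain literal "Success". If the goal is to label events whose text contains errorcount=0, an eval with match() is one common alternative; a minimal sketch, where the field name status is a hypothetical placeholder:

| eval status=if(match(_raw, "Data Time series.*errorcount=0"), "Success", "Failure")
| table _time status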
Hi, I have a table with fields x, y1, y2 and plot them in a line chart. How can I find the value of x where the two lines cross?
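A hedged sketch of one common approach: compute the difference between the two series and find the row where its sign flips; the crossing lies at or between that row and the previous one. The field names x, y1, y2 come from the question; diff and prev_diff are hypothetical helper fields:

| eval diff = y1 - y2
| streamstats current=f window=1 last(diff) as prev_diff
| where (diff >= 0 AND prev_diff < 0) OR (diff <= 0 AND prev_diff > 0)
| table x y1 y2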
How can organizations efficiently handle and extract relevant data, such as webcam activity, from Office 365 audit logs, particularly when leveraging tools like the "Splunk Add-on for Microsoft Office 365"?
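A hedged starting point for exploring what that add-on collects, assuming its default management-activity sourcetype; the index name o365 is a placeholder:

index=o365 sourcetype="o365:management:activity"
| stats count by Workload, Operation

The Workload and Operation values returned show which audit record types are available to filter on for the activity of interest.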
Color change only applies to numeric values. Here is a simple example using your "over", "under" range translated into 1, 0.

<form version="1.1" theme="light">
  <label>color range</label>
  <description>https://community.splunk.com/t5/Splunk-Search/SingleId-color-change-in-dashboard/m-p/688284#M234673</description>
  <fieldset submitButton="false">
    <input type="radio" token="value_tok" searchWhenChanged="true">
      <label>Select value</label>
      <choice value="over">Over</choice>
      <choice value="under">Under</choice>
      <default>over</default>
      <initialValue>over</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <single>
        <search>
          <query>| makeresults | eval value = case("$value_tok$" == "over", "1", "$value_tok$" == "under", "0")</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">none</option>
        <option name="drilldown">none</option>
        <option name="rangeColors">["0x53a051","0xdc4e41"]</option>
        <option name="rangeValues">[0]</option>
        <option name="refresh.display">progressbar</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
</form>
{"body":"2024-04-29T20:25:08.175779 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XX Logon Failed: Anonymous\n2024-04-29T20:25:10.190339 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah... See more...
{"body":"2024-04-29T20:25:08.175779 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XX Logon Failed: Anonymous\n2024-04-29T20:25:10.190339 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-29T20:25:10.241220 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-29T20:25:10.342343 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n","x-opt-sequence-number-epoch":-1,"x-opt-sequence-number":1599,"x-opt-offset":"3642132344","x-opt-enqueued-time":1714422318556} {"body":"2024-04-24T12:46:29.292880 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-24T12:46:34.634829 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Failed: Anonymous\n2024-04-24T12:46:34.651499 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-24T12:46:34.653643 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Failed: Anonymous\n2024-04-24T12:46:34.662636 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-24T12:46:34.712475 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-24T12:46:34.723543 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n2024-04-24T12:46:36.403615 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Failed: Anonymous\n","x-opt-sequence-number-epoch":-1,"x-opt-sequence-number":156626,"x-opt-offset":"3560527888816","x-opt-enqueued-time":1713962799368} {"body":"2024-04-24T01:04:30.375693 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Failed: Anonymous\n2024-04-24T01:04:35.034067 HTTPS REST-API 10.10.11.11:2132 XXX-XXX-XXX Logon Success: blah-blah-blah\n","x-opt-sequence-number-epoch":-1,"x-opt-sequence-number":156,"x-opt-offset":"355193796","x-opt-enqueued-time":171392067}     I have pasted my raw log samples in the above space. Can someone please help me to break these into multiple evnts using props.conf I wish to break the lines before each timestamp (highlighted).   Thanks, Ranjitha
Hi @HappySplunker. Yes, we found some failures within the Event Hub names and also within the consumer groups, but now it is working. Thanks!
Hi @NC_AS, probably the values in MACAddr1 and CL_MacAddr don't match. Did you uncheck case sensitivity in the lookup definition? Then check whether there are spaces in the lookup values. Ciao. Giuseppe
Thanks for the reply, @gcusello. I was setting up the lookups from the GUI (Settings > Lookups): uploading the lookup files, defining them, and setting up automatic lookups (now only the automatic lookups are unset). Setting the permissions to global did not solve the problem.

Also, when I entered the following in the search screen, the records associated with the value were displayed in the results:

| makeresults count=1 | eval value="cc:00:00:ab:cd:99" | lookup "240520_Macaddr_glpi.csv" MACAddr1 AS value

But with either of these in the search screen:

index="nc" | lookup "240520_Macaddr_glpi.csv" MACAddr1 AS CL_MacAddr

or

index="nc" | lookup "240520_Macaddr_glpi.csv" MACAddr1

I could not find PC_Name or Status in the fields; the result is the same as a search without the lookup. CL_MacAddr is a field with an already-extracted MAC address. Can you think of any other possible causes?

[Screenshot: result of the makeresults search. It would be nice if PC_Name, etc. were also shown for the MAC address from the logs like this.]

[Screenshot: result of the index="nc" search. Fields such as PC_Name do not appear here.]
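A hedged debugging step that may narrow this down: normalize the event value before the lookup, since lookups quietly return nothing on case or whitespace differences. The field and file names below come from the post; probe is a hypothetical helper field:

index="nc"
| eval probe=lower(trim(CL_MacAddr))
| lookup "240520_Macaddr_glpi.csv" MACAddr1 AS probe OUTPUT PC_Name Status
| table CL_MacAddr probe PC_Name Status

If PC_Name appears only with the normalized value, the stored lookup values or the lookup definition need the same normalization.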
Hi @mythili, you could use an eval command to get the timestamp of the second event:

| eval stop_time=strftime(_time+duration, "%Y-%m-%d %H:%M:%S.%2N")
| table sys_id stop_time

This also works with events that have the same timestamp. Ciao. Giuseppe
Hi @dokaas_2, as @richgalloway said, Splunk must be the owner of all its objects, otherwise it cannot run correctly. The usual issue is how to permit a non-root user to read root-owned objects for file monitoring or script execution; this is solved in many ways, but all the Splunk folders (executables, run, configurations, and data) must be owned by the splunk user. Ciao. Giuseppe
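A hedged illustration of that split, assuming a default /opt/splunk install and a filesystem with POSIX ACL support; /var/log/secure-app is a placeholder for a root-owned log directory:

# Splunk's own folders stay owned by the splunk user
chown -R splunk:splunk /opt/splunk
# grant the splunk user read access to root-owned logs without changing their owner
setfacl -R -m u:splunk:rX /var/log/secure-app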
Hi @gcusello, I need the timestamp of the 2nd event in the transaction, i.e., the stop time. When it showed an empty value, I tested getting both values and noticed this behavior.
Hi @SureshkumarD,

To get the drilldown working from a field in a table, you can use the drilldown options:

"eventHandlers": [
  {
    "type": "drilldown.customUrl",
    "options": {
      "url": "$row.URL.value|n$",
      "newTab": true
    }
  }
]

You can also set that up in the UI. [Screenshot: the same drilldown configured through the UI.]
Hi @Chirag812, good for you, see you next time! Let me know if I can help you more, or please accept an answer for the other people of the Community. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated.
Hi @ravida, as I said, I have never experienced this behavior and I use many custom Correlation Searches. Open a case with Splunk Support. Ciao. Giuseppe
Hi @AleZ214, three questions (where the third could be the answer to the second):

1. Did you check the grants and the user running your script?
2. How do you run your script? Did you use a script stanza in inputs.conf (see the sketch after this list)?
3. Why did you create your own script? Isn't the script that does the same thing in Splunk_TA_nix (https://splunkbase.splunk.com/app/833) sufficient for you?

In other words, if you cannot use the script in the above app, see how it's managed there and copy the approach for yours. Ciao. Giuseppe
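For reference, a hedged sketch of a scripted input stanza in inputs.conf; the script path and the sourcetype/index values are placeholders:

[script://./bin/my_custom_script.sh]
interval = 300
sourcetype = custom:script
index = main
disabled = false

Scripts run this way execute as the user that runs splunkd, which is often why a script behaves differently than when run manually.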
Hi @mythili, why do you need mvindex if you want to take the first timestamp of the transaction? Usually the transaction command takes as its timestamp the one from the first event among the correlated events. Ciao. Giuseppe
Hi @mythili, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi All, I am using the transaction command to group events and get the stop time of a device:

| transaction sys_id startswith="START" endswith="STOP"
| eval stop_time=strftime(mvindex(sys_time,1), "%Y-%m-%d %H:%M:%S.%2N")
| table sys_id stop_time

However, when a field has the same value for the startswith and endswith events (for example, sys_time is the same for both), mvindex(sys_time,1) is empty whereas mvindex(sys_time,0) returns the value. If the values are different, it works fine. Does anyone have any idea about this behavior and how to work around it to get the value regardless?
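A hedged note on a likely cause and workaround: by default, transaction deduplicates identical values when it merges a field from multiple events, so two equal sys_time values collapse into one and mvindex(sys_time,1) has nothing to return. The mvlist option asks transaction to keep every value in order; a minimal sketch under that assumption:

| transaction sys_id startswith="START" endswith="STOP" mvlist=sys_time
| eval stop_time=strftime(mvindex(sys_time,1), "%Y-%m-%d %H:%M:%S.%2N")
| table sys_id stop_time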
Hi @sumarri,

I created a dummy search to mock up your data, and created a lookup with 140,000 entries:

| makeresults count=140000
| streamstats count as id
| eval account="account" . substr("000000000".tostring(id),-6), keep="true"
| table account, keep
| outputlookup "accounts_to_keep.csv"

This will be our lookup file, replicating what you have in your lookup. It has the account ID and a "keep" field, and that's it.

Next, I created a dummy search to generate a bunch of data, with accounts we don't care about and the 140,000 we do care about:

| makeresults count=200000
| streamstats count as id
| eval account="account" . substr("000000000".tostring(id),-6)
| eval data=random()%10000, label="whatever", _time=relative_time(now(), "-" + tostring(random()%1000) + "m")
| table account, data, label, _time

To use the lookup to identify the accounts we want to keep, you can use this SPL:

| inputlookup accounts_to_keep.csv append=t
``` use eventstats if stats messes up your data: | eventstats values(keep) as keep by account ```
| stats values(*) as * by account
| search keep="true"
| fields - keep

This adds the contents of the lookup to the results (append=t). Then we use stats to combine the keep field with the events in the search; if that messes up your data, you can run eventstats instead, but that may run into memory issues with massive result sets. Finally, we search for all the events where the keep field is set to "true".

Depending on how big your lookup gets, you may want to make the lookup a KV store collection.
Hi @Ryan.Paredez, @Yousef.Raafat,

Regarding the NullPointerException error mentioned earlier:

**NullPointerException Log:**
```
ERROR NFSMountMetricsTask-Linux Monitor - Exception occurred collecting NFS I/O metrics java.lang.NullPointerException: null
```

We investigated this issue with the AppD Support team and discovered that the `nfsiostat` command (`nfsiostat <path>`) was returning a null value.

**Debug Log:**
```
[Monitor-Task-Thread4] 17 May 2024 04:35:43,500 DEBUG CommandExecutor-Linux Monitor - Command Output: [Illegal value vlan401.st4-mons0-t-isi001.corp.apple.com:/ifs/rinst/app_shared_rinsUAT_in]
```

The path configured in `config.yml` should be the mounted path rather than the filesystem path. In my case, the correct path is "/ngs/app/rinst/rins_in", not the filesystem path on the left; see the `df -h` output below.

**`df -h` Command Output:**
```
Filesystem                                                                 Size  Used Avail Use% Mounted on
vlan401.st4-mons0-t-isi001.corp.apple.com:/ifs/rinst/app_shared_rinsUAT_in 1.4T  484G  910G  35% /ngs/app/rinst/rins_in
```

**Updated `config.yml`:**
```
mountedNFS:
  - fileSystem: "/ngs/app/rinst/rins_in"
    displayName: "NFS1"
```

After making this change and restarting the machine agent, I am now able to successfully fetch the NFS metrics.

Thanks & Regards,
Shubham Kadam