All Topics



Hi, thanks in advance. Below is my JSON file (several objects concatenated back to back). How can I extract fields from it using the props.conf and transforms.conf configuration files?

{ "AAA": { "modified_files": [ "a/D:\\\\splunk\\\\A / ui/.env", "a/D:\\\\splunk\\\\A / ui/.env.example", "a/D:\\\\splunk\\\\B / ui/.env", "a/D:\\\\splunk\\\\B / ui/.env.example" ] } }
{ "BBB": { "modified_files": [ "a/D:\\\\splunk\\\\A / ui/.env", "a/D:\\\\splunk\\\\A / ui/.env.example", "a/D:\\\\splunk\\\\B / ui/.env", "a/D:\\\\splunk\\\\B / ui/.env.example" ] } }
{ "CCC": { "modified_files": [ "a/D:\\\\splunk\\\\A / ui/.env", "a/D:\\\\splunk\\\\A / ui/.env.example", "a/D:\\\\splunk\\\\B / ui/.env", "a/D:\\\\splunk\\\\B / ui/.env.example" ] } }
{ "DDD": { "modified_files": [ "a/D:\\\\splunk\\\\A / ui/.env", "a/D:\\\\splunk\\\\A / ui/.env.example", "a/D:\\\\splunk\\\\B / ui/.env", "a/D:\\\\splunk\\\\B / ui/.env.example" ] } }
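A minimal props.conf sketch, assuming the objects arrive back to back in one file and that a hypothetical sourcetype name json_modified_files is acceptable: break events at each }{ boundary and let search-time JSON extraction handle the fields, so transforms.conf is not strictly needed.

[json_modified_files]
SHOULD_LINEMERGE = false
# break between adjacent objects; the whitespace in the first capture group is discarded
LINE_BREAKER = \}(\s*)\{
KV_MODE = json

With the events broken this way, each event is one complete JSON object, and fields such as AAA.modified_files{} should be extracted automatically at search time; a transforms.conf REPORT would only be needed for custom field names.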
Hi, I need to calculate the count and percentage of fields. The original post is here. The main issue is that some field values contain a space or are blank (just two single quotation marks). I have SPL like below:

| eventstats count as namecount by name
| eventstats count as colorcount by color
| eventstats count as statuscount by status
| eventstats count(name) as nametotal
| eventstats count(color) as colortotal
| eventstats count(status) as statustotal
| eval name=printf("%04u %s %d", 10000-namecount, name, nametotal)
| eval color=printf("%04u %s %d", 10000-colorcount, color, colortotal)
| eval status=printf("%04u %s %d", 10000-statuscount, status, statustotal)
| stats values(name) as name values(color) as color values(status) as status
| eval cname=mvmap(name,10000-tonumber(mvindex(split(name," "),0)))
| eval ccolor=mvmap(color,10000-tonumber(mvindex(split(color," "),0)))
| eval cstatus=mvmap(status,10000-tonumber(mvindex(split(status," "),0)))
| eval pname=mvmap(name,100*(10000-tonumber(mvindex(split(name," "),0)))/tonumber(mvindex(split(name," "),2)))
| eval pcolor=mvmap(color,100*(10000-tonumber(mvindex(split(color," "),0)))/tonumber(mvindex(split(color," "),2)))
| eval pstatus=mvmap(status,100*(10000-tonumber(mvindex(split(status," "),0)))/tonumber(mvindex(split(status," "),2)))
| eval name=mvmap(name,mvindex(split(name," "),1))
| eval color=mvmap(color,mvindex(split(color," "),1))
| eval status=mvmap(status,mvindex(split(status," "),1))
| fields name cname pname color ccolor pcolor status cstatus pstatus

Some "date" or "color" values look like 'Mon May 30 00:00:00 USDT 2022' or ''. Some of them contain spaces between the single quotation marks, like 'Mon May 30 00:00:00 USDT 2022', and some are empty, just two single quotation marks like ''. These do not show correctly, and their percentages are not calculated.

Current output:

Date      cDate  %Date   Color   cColor  %Color
'Mon      2              ''      1
'Today'   1      33.0    'red'   2       66.0

Expected output:

Date                              cDate  %Date   Color   cColor  %Color
'Mon May 30 00:00:00 USDT 2022'   2      66.66   ''      1       33.3
'Today'                           1      33.0    'red'   2       66.0
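The space-delimited printf encoding falls apart when the value itself contains spaces (or is empty), because split(name, " ") no longer returns the value as a single element. A sketch of one way around this, assuming a character such as the pipe cannot occur in the data, shown for the name field only (color and status would be changed the same way):

| eval name=printf("%04u|%s|%d", 10000-namecount, name, nametotal)
...
| eval cname=mvmap(name,10000-tonumber(mvindex(split(name,"|"),0)))
| eval pname=mvmap(name,100*(10000-tonumber(mvindex(split(name,"|"),0)))/tonumber(mvindex(split(name,"|"),2)))
| eval name=mvmap(name,mvindex(split(name,"|"),1))

With "|" as the delimiter, a value like 'Mon May 30 00:00:00 USDT 2022' stays intact as element 1 of the split, and an empty '' still produces a (blank) middle element, so the counts and percentages line up.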
Hi, I have an SPL search which should exclude the IP values from 4 lookups, so I tried a subsearch approach. But this search takes longer than usual to run. How can I optimize it?

index=a OR index=b action=* attack!=N/A
    NOT (( [| inputlookup a.csv | fields ip | rename ip as src_ip])
      OR ( [| inputlookup b.csv | fields ip | rename ip as src_ip])
      OR ( [| inputlookup c.csv | fields ip | rename ip as src_ip])
      OR ( [| inputlookup d.csv | fields ip | rename ip as src_ip]))
| stats dc(attack) as dc_attack by src_ip
| where dc_attack > 2
| dedup src_ip

Thanks in advance!
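One way to cut the subsearch overhead is to run a single subsearch that appends all four lookups together rather than four separate subsearches; a sketch, assuming each CSV has an ip column:

index=a OR index=b action=* attack!=N/A
    NOT [| inputlookup a.csv
         | inputlookup append=true b.csv
         | inputlookup append=true c.csv
         | inputlookup append=true d.csv
         | fields ip
         | rename ip as src_ip
         | dedup src_ip ]
| stats dc(attack) as dc_attack by src_ip
| where dc_attack > 2

If the lookups are large enough to hit subsearch result limits, a search-time lookup followed by a where isnull(...) filter avoids the subsearch entirely.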
index="SOMETHING"  earliest=-30d@d | stats earliest(_time) as action_StartTime latest(_time) as action_EndTime | eval elapsed_Time= action_EndTime - action_StartTime | convert ctime(action_StartTi... See more...
index="SOMETHING"  earliest=-30d@d | stats earliest(_time) as action_StartTime latest(_time) as action_EndTime | eval elapsed_Time= action_EndTime - action_StartTime | convert ctime(action_StartTime) ctime(action_EndTime) ctime(elapsed_Time) | fields + action_StartTime action_EndTime elapsed_Time  | sort by action_StartTime The elapsed_Time is wrong, how can i make it correct?
Hello, I use the cron expression below to run a search "at minute 10 past every hour from 7 through 19":

10 7-19 * * *

Instead of this, I would like to run the search every 15 minutes, always between 07:00 and 19:00. Could you help please?
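A sketch of the standard cron pattern for this, assuming runs at 19:15, 19:30, and 19:45 are acceptable (use 7-18 as the hour range if the last run should be at 19:00 sharp, i.e. 18:45):

*/15 7-19 * * *

The */15 step in the minute field fires at :00, :15, :30, and :45 of every hour in the 7-19 range.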
I would like to know if there is a way to check when the rsync and PostgreSQL sync of data from primary to standby is completed. It is required in the scenario below.

1. Warm standby is set up and working from instance A to instance B.
2. Instance A goes down and Phantom fails over to instance B.
3. Now all the updates are happening on instance B. Assume that it continues this way for a number of days.
4. Now instance A is back up, and we configure warm standby again with instance B as primary and instance A as standby, to allow the updates made on instance B in the last few days to flow back to instance A.
5. This configuration only needs to continue until all the updates are synced back to instance A from B, after which we can go back to the original configuration with instance A as primary and instance B as standby. For that, we need to know how to verify that all the updates/data are synced from primary to standby.
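I am not certain how Phantom's warm standby exposes this, but if it is built on standard PostgreSQL streaming replication, one generic check is to compare the sent and replayed WAL positions on the primary; a sketch, assuming PostgreSQL 10 or later and direct psql access:

-- on the primary: replication has caught up when replay_lsn equals sent_lsn
SELECT client_addr, state, sent_lsn, replay_lsn,
       pg_wal_lsn_diff(sent_lsn, replay_lsn) AS bytes_behind
FROM pg_stat_replication;

For the rsync side, a dry run (rsync -n --itemize-changes ...) over the replicated file trees that reports nothing left to transfer is one generic way to confirm the files match.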
To start, I was pointed to this solution, but despite the question being very similar, the answer marked as a solution doesn't seem to actually provide the quantitative total that I am looking for.

I have a series of events with a start and stop time, in epoch time. These events can be grouped by a common field, `host`, and I am trying to determine the total amount of deduplicated time that these events span. For example:

Host_1, Event_1: starts at 13:00, ends at 13:15
Host_1, Event_2: starts at 13:10, ends at 13:20
Host_1, Event_3: starts at 13:30, ends at 14:00

The total time for Host_1 would therefore be 50 minutes:

Event_1: 15 minutes
Event_2: 5 minutes (10 minutes - 5 minutes of overlap with Event_1)
Event_3: 30 minutes (no overlap with any other events)
Total: 50 minutes

I had tried to leverage streamstats to get information about previous events, but couldn't work out how to get it to properly reset when the events didn't overlap. I'm not even sure streamstats is the best method for solving this type of problem. A sketch of one approach follows the test data below.

EDIT: some test data may be helpful.

0,"hostname","start_time","end_time"
1,"host_1","1654130041.626307","1654130566.626307"
2,"host_1","1654131696.975800","1654133451.975800"
3,"host_1","1654132454.687189","1654134263.687189"
4,"host_1","1654132747.975800","1654133451.975800"
5,"host_1","1654133805.740912","1654134236.740912"
6,"host_1","1654136688.170093","1654136722.170093"
7,"host_1","1654136782.300892","1654136818.300892"
8,"host_1","1654136885.031861","1654137288.031861"
9,"host_1","1654137388.801936","1654139394.801936"

Doing the math, rows numbered 3 and 4 both have `start_time` values that are earlier than row 2's `end_time` value, indicating that a duration overlap occurs across several rows.
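For what it's worth, the classic interval-union pattern does work with streamstats: sort by start time, carry the running maximum end time with streamstats, and start a new block whenever an event begins after that maximum. A sketch against the test data's field names:

| eval start_time=tonumber(start_time), end_time=tonumber(end_time)
| sort 0 hostname start_time
| streamstats current=f max(end_time) as prev_end by hostname
| eval new_block = if(isnull(prev_end) OR start_time > prev_end, 1, 0)
| streamstats sum(new_block) as block_id by hostname
| stats min(start_time) as block_start max(end_time) as block_end by hostname block_id
| eval block_duration = block_end - block_start
| stats sum(block_duration) as total_deduplicated_seconds by hostname

On the three-event example this produces two blocks (13:00-13:20 and 13:30-14:00), i.e. 50 minutes; the running max(end_time) is what keeps a later-starting but earlier-ending event (like row 4) from splitting a block incorrectly.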
Hi guys, I am looking for ways to alert when memory usage rises or dips. Could you please advise which MLTK approach I should use for this case? Thank you!
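Not knowing the data, one common MLTK starting point for numeric anomalies like this is the DensityFunction algorithm; a sketch, where the index, sourcetype, and mem_used_pct field names are all assumptions:

First, train a baseline model:

index=os sourcetype=vmstat
| fit DensityFunction mem_used_pct into app:memory_baseline

Then apply it in a scheduled alert search:

index=os sourcetype=vmstat
| apply app:memory_baseline
| where 'IsOutlier(mem_used_pct)' = 1

DensityFunction flags values in the low-probability tails of the fitted distribution, which covers both unusual rises and unusual dips.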
Hi there, I am trying to generate a choropleth map of the US using the following search:

| iplocation final_ip
| search Country = "United States"
| stats count as volume by Region
| rename Region as state
| dedup state
| table state volume
| geom geo_us_states featureIdField="state" allFeatures=True

This gives a response with the fields state, volume, featureCollection, and geom, but the map is still empty. Using geostats instead and then doing a lookup does give a map, but the count (aka volume) is very low. Can you help please?
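One thing worth checking is whether the state values actually match the feature ids in geo_us_states, which are full state names such as "California"; iplocation can also return empty Region values that end up as a blank row. A sketch that drops blanks and the redundant dedup (stats by Region already yields one row per region):

| iplocation final_ip
| search Country="United States" Region=*
| stats count as volume by Region
| rename Region as state
| geom geo_us_states featureIdField="state" allFeatures=True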
Y'all, I have events from a Windows event log, and the application writes time with ms precision into the Message field of the event, along with other app data. So the Message field looks like:

Message=2022-05-05 22:34:11.756|lots|of|app|logging|pipe|separated

matching the strftime format "%Y-%m-%d %H:%M:%S.%3N". I have this in my props.conf (there are no "quotation marks" around the Message value in the event):

[WinEventLog:RPA]
TIME_PREFIX = Message=
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n](?=\d{2}/\d{2}/\d{2,4} \d{2}:\d{2}:\d{2} [aApPmM]{2}))

But I still get all events tagged with the time (probably from the Windows event time) with second accuracy, not ms. I check that with a search like:

index=my_app_events source="WinEventLog:MyApp"
| convert timeformat="%Y-%m-%d %H:%M:%S.%3N" ctime(_time)
| table _time Message

Is it possible to have ms accuracy in _time (I saw somewhere that it was second granularity), or am I missing something else?

Thanks in advance,
R.

P.S. For bonus points: in some error cases the Message= field will contain error data with no timestamp. In such a case, of course, I want to fall back to the Windows event timestamp.
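Two things worth checking, offered as assumptions rather than a confirmed fix: these settings only take effect on the first Splunk instance that parses the data (an indexer or heavy forwarder, not a universal forwarder), and the default timestamp lookahead can stop before the milliseconds. A sketch of the stanza with an explicit lookahead:

[WinEventLog:RPA]
TIME_PREFIX = Message=
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
# 23 characters covers "2022-05-05 22:34:11.756", counted from the end of TIME_PREFIX
MAX_TIMESTAMP_LOOKAHEAD = 23
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n](?=\d{2}/\d{2}/\d{2,4} \d{2}:\d{2}:\d{2} [aApPmM]{2}))

On the P.S.: as far as I know, when no timestamp matches after TIME_PREFIX, Splunk falls back to the previous event's time or the receipt time, not necessarily the Windows header time, so that case is worth testing separately.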
I have a macro that starts with a search command. When I ran it, I noticed I was getting a different number of results than if I just ran the raw SPL instead of using the macro.

As an example, my macro was named open_vulnerabilities and the SPL was:

search index="vulnerabilities" severity_id>=2 state!="fixed"

If I use the macro in the search bar like this:

`open_vulnerabilities`

I would get, say, 10 results. But if I ran the full SPL (index="vulnerabilities"...) then I'd get 100 results. I ended up figuring out that if I used a | before the macro name, like this:

| `open_vulnerabilities`

then I'd get the number of results I expected. I just don't understand why. If I got 0 results, then it would make some sense, but the fact that it's returning 10 really has me stumped. Any help would be greatly appreciated. Thanks
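For what it's worth, one convention that sidesteps this class of surprise is to leave the leading search keyword out of the macro body, so the macro expands into an ordinary search expression rather than a full command; a sketch of the definition and its use:

macro definition (no leading "search"):
index="vulnerabilities" severity_id>=2 state!="fixed"

usage:
`open_vulnerabilities` | stats count by severity_id

Defined this way, the macro behaves identically whether it starts the search bar or is combined with other terms.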
Hi,

I am using a case statement to create a new field whose values depend on certain other fields taking certain values. The new field I am creating is called "XYZ". For events whose field "Planned Migration Completion Iteration" has a value beginning with "Decom by", the "XYZ" field should have a value of "Done". Similarly, for events whose field "Migration Comments" has a value equal to "In progress", the "XYZ" field should have a value of "In progress". Finally, for all other scenarios, XYZ takes the value "Not Started". However, my current case statement (screenshot omitted) only ever outputs the "Not Started" case. Can you please help? Many thanks
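A sketch of how that case() could be written, assuming the field names are exactly as quoted; the two common pitfalls here are forgetting the single quotes that eval requires around field names containing spaces, and not using like() (with a % wildcard) for the "begins with" condition:

| eval XYZ = case(
    like('Planned Migration Completion Iteration', "Decom by%"), "Done",
    'Migration Comments' == "In progress", "In progress",
    true(), "Not Started")

If either pitfall is present, every branch silently fails to match and only the true() default fires, which would explain seeing only "Not Started".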
Is it possible to allow users the interactive capability of selecting between the light and dark mode themes by way of an input field? I tried:

<form theme="$background_theme_token$">

along with an input field for that token (light or dark selection), and was not able to get it to work. Any suggestions, or is this not doable in the Classic dashboard (XML) world?

Thanks in advance,
Bob
I am trying to do a search whereby:

index=firewall (src_ip=172.16.0.0/12) dest_ip!=172.16.0.0/12
| table src_ip src_port dest_ip dest_port
| dedup src_ip

When I run this search I still see 172.16.0.0/12 destination IP addresses. I've also tried it this way:

index=firewall (src_ip=172.16.0.0/12) NOT dest_ip IN (172.16.0.0/12)
| table src_ip src_port dest_ip dest_port
| dedup src_ip
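A sketch of two forms that should do the CIDR exclusion (the search command understands CIDR notation for IP-like field values, and cidrmatch() is the explicit eval-time equivalent):

index=firewall src_ip="172.16.0.0/12" NOT dest_ip="172.16.0.0/12"
| table src_ip src_port dest_ip dest_port
| dedup src_ip

or, equivalently:

index=firewall src_ip="172.16.0.0/12"
| where NOT cidrmatch("172.16.0.0/12", dest_ip)
| table src_ip src_port dest_ip dest_port
| dedup src_ip

One caveat: if dest_ip is multivalued, NOT dest_ip=... drops the event when any of its values falls in the range, which may or may not be the intent.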
Hi everyone, I am currently getting logs from Microsoft 365, and one of its panels shows impossible simultaneous locations. When I check an IP with a whois page like VirusTotal or AbuseIPDB, an address Splunk places in Sweden, for example, turns out to really be from another country. Is there something wrong with the iplocation command, or something I need to adjust? How can it be solved?
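iplocation answers come from the MaxMind GeoLite2 database file bundled with Splunk, which goes stale over time, so mismatches against live whois services are common. One option, as an assumption about your setup rather than a confirmed fix, is to download a newer GeoLite2-City.mmdb and point Splunk at it via limits.conf on the search head:

[iplocation]
# path to an updated MaxMind GeoLite2 City database
db_path = /opt/splunk/share/GeoLite2-City.mmdb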
Hi all,

There are lots of forum topics on this, but I'm really struggling to get my head around it. I have the following information in JSON:

{
  "4": {
    "state": { "on": false, "bri": 254, "hue": 8418, "sat": 140, "effect": "none", "xy": [ 0.5053, 0.4152 ], "ct": 454, "alert": "select", "colormode": "ct", "mode": "homeautomation", "reachable": false },
    "swupdate": { "state": "transferring", "lastinstall": "2020-03-03T14:19:37" },
    "type": "Extended color light",
    "name": "Hue lightstrip plus 1",
    "modelid": "LST002",
    "manufacturername": "Signify Netherlands B.V.",
    "productname": "Hue lightstrip plus",
    "capabilities": { "certified": true, "control": { "mindimlevel": 40, "maxlumen": 1600, "colorgamuttype": "C", "colorgamut": [ [ 0.6915, 0.3083 ], [ 0.17, 0.7 ], [ 0.1532, 0.0475 ] ], "ct": { "min": 153, "max": 500 } }, "streaming": { "renderer": true, "proxy": true } },
    "config": { "archetype": "huelightstrip", "function": "mixed", "direction": "omnidirectional", "startup": { "mode": "safety", "configured": true } },
    "uniqueid": "00:17:88:01:04:06:ae:3d-0b",
    "swversion": "1.50.2_r30933",
    "swconfigid": "59F2C3A3",
    "productid": "Philips-LST002-1-LedStripsv3"
  },
  "5": {
    "state": { "on": false, "bri": 144, "hue": 7676, "sat": 199, "effect": "none", "xy": [ 0.5016, 0.4151 ], "ct": 443, "alert": "select", "colormode": "xy", "mode": "homeautomation", "reachable": true },
    "swupdate": { "state": "noupdates", "lastinstall": "2021-08-13T13:53:48" },
    "type": "Extended color light",
    "name": "Upstairs Hall",
    "modelid": "LCT015",
    "manufacturername": "Signify Netherlands B.V.",
    "productname": "Hue color lamp",
    "capabilities": { "certified": true, "control": { "mindimlevel": 1000, "maxlumen": 806, "colorgamuttype": "C", "colorgamut": [ [ 0.6915, 0.3083 ], [ 0.17, 0.7 ], [ 0.1532, 0.0475 ] ], "ct": { "min": 153, "max": 500 } }, "streaming": { "renderer": true, "proxy": true } },
    "config": { "archetype": "sultanbulb", "function": "mixed", "direction": "omnidirectional", "startup": { "mode": "safety", "configured": true } },
    "uniqueid": "00:17:88:01:04:ff:49:53-0b",
    "swversion": "1.88.1",
    "swconfigid": "76B74E79",
    "productid": "Philips-LCT015-1-A19ECLv5"
  },

I want the information for "4" and "5" to be ingested as separate events at index time. I understand that one could use regex to filter this properly, but honestly I'm struggling to wrap my head around how. Any help would be massively appreciated.

Many thanks,
John
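A props.conf sketch of the usual index-time approach: use LINE_BREAKER to start a new event at each top-level numbered key. The sourcetype name is an assumption, and note that the resulting events are not perfectly well-formed JSON (the outer braces and trailing commas survive the break), so some SEDCMD cleanup may also be wanted before search-time JSON extraction is reliable:

[hue:lights]
SHOULD_LINEMERGE = false
# break before each top-level "<number>": { key; the comma/newline run in the
# first capture group is discarded
LINE_BREAKER = ([\r\n,]+)\s*"\d+":\s*\{
KV_MODE = json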
I'm writing a piece of code that runs in a distributed system pipeline. I need to paginate over search results; I can't retrieve all the results in one go (the result set may be bigger than the batch_size the components can use). I've managed to retrieve the old job by:

sid2 = '1654105815.20373'
old_job = service.job(sid2)

However, sometimes the job expires and I have problems getting the results.

for result in results.JSONResultsReader(old_job.events(**{"count": total_size, "output_mode": "json", "offset": 800})):
    if isinstance(result, results.Message):
        print(result)
    elif isinstance(result, dict):
        final_res.append(result)

This never returns anything (no values are present). Any advice?
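A sketch of one way to keep the job alive while paging through it, assuming splunklib (the Splunk Python SDK), whose Job objects expose set_ttl() and touch() for extending a job's lifetime; the connection details and batch size are illustrative:

import splunklib.client as client
import splunklib.results as results

# connect (credentials are placeholders)
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

old_job = service.job('1654105815.20373')
old_job.set_ttl(86400)  # keep the search artifacts around for 24 hours

final_res = []
offset, batch = 0, 500
while True:
    reader = results.JSONResultsReader(
        old_job.results(output_mode="json", count=batch, offset=offset))
    page = [r for r in reader if isinstance(r, dict)]
    final_res.extend(page)
    if len(page) < batch:
        break  # last (partial) page reached
    offset += batch
    old_job.touch()  # reset the ttl countdown on each page

Once a job's ttl has already lapsed, its artifacts are gone and the sid returns nothing, so extending the ttl (or re-running the search) has to happen before expiry, not after.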
Is there a way to change the order of the "stack_trace" attribute so that it shows up last within the log message?
I didn't find the cloud documentation very clear... Do I need to install Splunk Enterprise separately to have a heavy forwarder, and then configure my Splunk Cloud license? Do I need to ask Splunk support for an Enterprise license? And how do I configure a heavy forwarder in the first place? Also, what address do I put in the universal forwarder: the cloud IP or hostname? I've read the following threads and only get more and more confused:

https://www.splunk.com/en_us/resources/videos/splunk-cloud-tutorial.html
https://community.splunk.com/t5/Getting-Data-In/How-to-set-up-a-heavy-forwarder-to-forward-data-to-Splunk-Cloud/m-p/250588
https://docs.splunk.com/Documentation/SplunkCloud/8.2.2202/Admin/WindowsGDI (Step 2)

Can you help me please?

Regards
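For context: a heavy forwarder is just a regular Splunk Enterprise installation configured to forward, and the usual flow for any forwarder (universal or heavy) is to download the forwarder credentials package (splunkclouduf.spl) from your Splunk Cloud instance and install it on the forwarder. That app ships an outputs.conf pointing at your stack, roughly of this shape; this is purely illustrative, since the real file also carries TLS settings and should come from the credentials app rather than hand-editing:

[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.<your-stack>.splunkcloud.com:9997

So the address the forwarder needs is the cloud inputs hostname from that package, not an IP you look up yourself.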
Hello, I'm using the new Dashboard Studio and have a couple of single value and single value + trend components. I want these to show a "0" when no results are available for a given component, instead of the default icon.

The single value has a | timechart count at the end and shows results when available; however, it shows the default icon when no results are available, which I want to fix to make things clearer to the user. I tried | fillnull, | fillnull value=0 count, and even adding "shouldSparklineAcceptNullData": true in the code section for that single value, but nothing seems to address the problem.

Any idea how I can have a 0 show? I have read multiple questions around the same topic, but sadly none of the answers I found seem to work for me. Thanks in advance for any help this awesome community can provide.
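fillnull can't help here because an empty search has no rows to fill. One common workaround, offered as a sketch rather than a guaranteed Studio fix, is to append a zero row only when the search returned nothing:

... | timechart count
| appendpipe [ stats count | where count == 0 ]

stats count over an empty result set yields a single row with count=0, and the where clause discards that row whenever real results exist, so the single value always has something to render.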