All Posts


If there is no missing value for any key, you can potentially do something simple to achieve the goal of presenting aws_tags in <key>::<value> format:

| tstats latest(cloudprovider.aws.tags{}.key) as key latest(cloudprovider.aws.tags{}.value) as value where <your filter> by assetId
| eval idx = mvrange(0, mvcount(key))
| eval AWS_TAGS = mvmap(idx, mvindex(key, idx) . "::" . mvindex(value, idx))
| fields - key value
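If you want to sanity-check the index/mvmap pairing without the tstats part, here is a minimal runnable sketch with made-up tag values:

| makeresults
| eval key=split("AAA,BBB,CCC", ","), value=split("aaa,bbb,ccc", ",")
``` walk the indexes and glue each key to the value at the same position ```
| eval idx = mvrange(0, mvcount(key))
| eval AWS_TAGS = mvmap(idx, mvindex(key, idx) . "::" . mvindex(value, idx))
| fields - key value idx

This should yield AWS_TAGS as the multivalue AAA::aaa BBB::bbb CCC::ccc.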
This requires some discovery work from your organisation or network engineer; Splunk can't magically work out a network's connections unless you have data that states them. An option might be to ingest the data from these components, ensuring the source_ip and destination_ip fields are captured, which may help you see the traffic flow. You can then combine the discovery work with a Splunk app such as https://splunkbase.splunk.com/app/6876. Alternatively, you might look at other third-party tools for that, and perhaps create a lookup file that contains this information so it's available in Splunk to support your use case.
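As a rough illustration of the lookup approach, here is a minimal sketch; the index, sourcetype, lookup name network_connections.csv, and its fields (source_ip, destination_ip, link_name) are all placeholders for illustration:

index=network sourcetype=firewall_traffic
``` enrich each flow with the link documented during discovery; lookup name and fields are hypothetical ```
| lookup network_connections.csv source_ip, destination_ip OUTPUT link_name
| stats count by source_ip destination_ip link_name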
I want to monitor Splunk Enterprise in a cluster environment. I monitor the Splunk infrastructure with New Relic, and I also want to use the DMC health check items. Where can I get the health check items other than by updating them? Also, please let me know if there are any other ways to monitor Splunk.
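For what it's worth, external monitors can also pull the overall splunkd health report over REST; a minimal sketch, and the exact response fields may vary by Splunk version:

| rest splunk_server=* /services/server/health/splunkd
``` one row per instance with its overall health state ```
| table splunk_server health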
Hi @CMEOGNAD Assuming that today's date will always be the first element of result and tomorrow the second, you can do this:

``` adding sample data ```
| makeresults
| eval _raw="{ \"result\" : [ {\"2024-06-10\" : 1338}, {\"2024-06-11\" : 1715} ] }"
``` using spath to extract values ```
| spath output=today path=result{0}
| rex field=today "\{\"(?<todayDate>[^\"]+)\"\s\:\s(?<todayResult>\d+)"
| spath output=tomorrow path=result{1}
| rex field=tomorrow "\{\"(?<tomorrowDate>[^\"]+)\"\s\:\s(?<tomorrowResult>\d+)"

This gets you the todayResult and tomorrowResult values extracted with regex. Ideally, you could extract the values directly with spath, but it seems it's not possible to use a variable for the path in spath, e.g.

| eval today=tostring(strftime(_time,"%Y-%m-%d"))
| spath output=today path=result{0}
| spath input=today output=today path='today'

Hope this helps.
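Not part of the original answer, but if your Splunk version has the JSON eval functions (8.1 or later), json_extract can sidestep the regex; a sketch under that assumption, reusing the sample event above:

| makeresults
| eval _raw="{ \"result\" : [ {\"2024-06-10\" : 1338}, {\"2024-06-11\" : 1715} ] }"
``` array elements are addressed with {n} in the json_extract path ```
| eval today=json_extract(_raw, "result{0}"), tomorrow=json_extract(_raw, "result{1}")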
@ITWhisperer it didn't work
So this app works in conjunction with the saved search "Risk Factors - Lookup Gen". Check if the search is running as expected and see if the lookup contents are being populated. Also, for the visualizations to be populated, you would need to install the following 2 apps:
* Treemap: https://splunkbase.splunk.com/app/3118
* Sankey: https://splunkbase.splunk.com/app/3112
What's the Java version on the host machine? I think it should be >= 1.8. Also, try providing the absolute path to a Java binary that is >= 1.8 when starting the machine agent. Regards
I see you asked this in Slack, but you can use foreach on your final data example; there may be a better way to work it out inside the foreach. Not sure what you want to do about the host name prefix, but if it's fixed you can add it back.

| makeresults format=csv data="host_list
abc0002
abc0003
abc0004
abc0005
abc0006
abc0007
abc0008
abc0009
abc0010
abc0011
abc0012
abc0013
abc0014
abc0015
abc0016
abc0017
abc0018
abc0019
abc0020
abc0022
abc0024
abc0025
abc0026
abc0027
abc0028
abc0029
abc0031"
| eval test="new"
| stats values(host_list) as host_list by test
``` Above is creating your example data ```
``` Get the numeric part ```
| rex field=host_list max_match=0 "(?<prefix>[^0-9]*)(?<id>\d+)"
| eval c=0
| foreach id mode=multivalue
    [ eval n=<<ITEM>>, diff=n-prev,
      ss=case(isnull(ss), mvindex(prefix, c).<<ITEM>>, diff>1, mvappend(ss, mvindex(prefix, c).<<ITEM>>), true(), ss),
      ee=case(isnull(ss), null(), diff>1, mvappend(ee, r), true(), ee),
      r=mvindex(prefix, c).<<ITEM>>,
      prev=n,
      c=c+1 ]
| eval ee=mvappend(ee, r)
| eval ranges=mvzip(ss, ee, "-")
| fields - diff id n prev r ss ee c
Hi @power12 try something like this (assuming the host names all follow the same format)

index=abc
| rex field=host "(?<hostname>\w+)(?<hostnum>\d+)"
| eval hostnum=tonumber(hostnum)
| eval hostgroup=case(hostnum>=2 AND hostnum<=20, "group1", hostnum=22, "group2", hostnum>=24 AND hostnum<=29, "group3", hostnum=31, "group4")
| stats count by host test hostgroup
| stats count as total_count values(host) as host_list by test, hostgroup
Hi @ClubMed The only way to do this with tstats might be to get the fields extracted in a datamodel first; however, I suspect that might defeat the purpose of using tstats, as it would be slower than just using your original search. Another option might be to save your original search as a scheduled report which dumps the key/value/assetId data into a lookup that you can quickly retrieve with | inputlookup.
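A minimal sketch of the scheduled-report idea; the lookup name aws_tags_latest.csv and the field list are illustrative:

``` scheduled report: run the original search, then persist the results ```
<your original search>
| table assetId AWS_TAGS
| outputlookup aws_tags_latest.csv

``` elsewhere, retrieve the precomputed results quickly ```
| inputlookup aws_tags_latest.csv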
A couple of minor edits - latest is exclusive and the format should be MM/DD/YYYY:HH:MM:SS, so it would be:

index="aws" sourcetype="dev" (earliest=-1y latest="01/01/2024:00:00:00" "A" OR "B") OR (earliest="01/01/2024:00:00:00" latest=now "C" OR "D")
I have a search that outputs the host list by test:

index=abc
| stats count by host test
| stats count as total_count values(host) as host_list by test

which gives me a list of hosts by test like below:

test: new
host_list: abc0002 abc0003 abc0004 abc0005 abc0006 abc0007 abc0008 abc0009 abc0010 abc0011 abc0012 abc0013 abc0014 abc0015 abc0016 abc0017 abc0018 abc0019 abc0020 abc0022 abc0024 abc0025 abc0026 abc0027 abc0028 abc0029 abc0031

I would like to group the ranges of hosts, like [abc0002-abc0020] [abc0022] [abc0024-abc0029] [abc0031], instead of the whole list by test, like below:

test: new
host_list: abc0002 abc0003 abc0004 abc0005 abc0006 abc0007 abc0008 abc0009 abc0010 abc0011 abc0012 abc0013 abc0014 abc0015 abc0016 abc0017 abc0018 abc0019 abc0020 abc0022 abc0024 abc0025 abc0026 abc0027 abc0028 abc0029 abc0031
host_array: [abc0002-abc0020] [abc0022] [abc0024-abc0029] [abc0031]

Thank you in advance, Splunkers
Hi AppD SMEs,

When we enable RUM for the X application, what options are available to pull user count details? For example:
1. Can we get the number of users for a given time in the application (not broken down by time intervals)?
2. Is it possible to fetch concurrent user details from RUM?

I would really appreciate your assistance and insights.

Thanks, MSK
Could you check your private messages, if you don't mind?
Hi @KhalidAlharthi try this in props.conf (on the indexer or HF):

PREAMBLE_REGEX = \w{3}\s(\d{2}[\s\:]){4}(\d{1,3}\.){3}\d{1,3}\s\w{3}\s(\d{2}[\s\:]){4}[^\s]+\s
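For context, a sketch of the props.conf stanza this setting would live in; the sourcetype name my_snmp_logs is illustrative:

# props.conf on the indexer or HF; stanza name is hypothetical
[my_snmp_logs]
PREAMBLE_REGEX = \w{3}\s(\d{2}[\s\:]){4}(\d{1,3}\.){3}\d{1,3}\s\w{3}\s(\d{2}[\s\:]){4}[^\s]+\s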
Hi, I have the following JSON object that is indexed via the default JSON extraction (INDEXED_EXTRACTIONS):

{
  "assetId": 123456,
  "cloudProvider": {
    "aws": {
      "ec2": { ... },
      "tags": [
        { "key": "AAA", "value": "aaa" },
        { "key": "BBB", "value": "bbb" },
        { "key": "CCC", "value": "ccc" }
      ]
    }
  }
}

I'm attempting to re-write the following original search into tstats:

...
| spath output=AWS_TAGS path="cloudProvider.aws"
| stats latest(AWS_TAGS) AS AWS_TAGS by assetId
| spath input=AWS_TAGS output=AWS_TAGS path="tags{}"
| eval AWS_TAGS=mvmap(AWS_TAGS,spath(AWS_TAGS,"key")."::".spath(AWS_TAGS,"value"))

This creates the AWS_TAGS multivalue list with a result like this for each assetId:

AAA::aaa
BBB::bbb
CCC::ccc

The issue is that the JSON object found at the path 'cloudProvider.aws' does not exist with tstats, i.e. there's no JSON object value for the TERM(cloudprovider.aws). That's why my original search had an spath, to explicitly grab the JSON object at 'cloudProvider.aws'. This allowed me to get the latest tags for each assetId and preserve the key-value pairs with mvmap. tstats only sees the terms cloudprovider.aws.tags{}.key and cloudprovider.aws.tags{}.value, which I could get with tstats values(), but they may or may NOT be the latest. Plus it will be tricky to line them up as key-value pairs. I definitely get the fact that tstats looks for terms in tsidx files, so _raw is not searched. I guess the ask here is: any idea how to get the cloudProvider.aws JSON object extracted for tstats at search time?
Hi @syk19567 would something like this work? (Replace timestamps with epoch time)

index="aws" sourcetype="dev" (earliest=-1y latest="2023/12/31 23:59:59" "A" OR "B") OR (earliest="2024/01/01 00:00:00" latest=now "C" OR "D")
@KendallW Thanks for responding to this matter. Could you please give an example, because I don't understand it all that well? For example, this log:

Jul 14 14:15:56 10.128.213.50 Jul 14 14:15:56 my-host-int02 snmpd[7777]: Received SNMP packet(s) from UDP: [10.128.30.20]:54900

I want to remove the timestamp and host at the beginning of the event. This happened because of the non-syslog sourcetype, I guess, and I want it to be removed.
Hi @KhalidAlharthi You can do this with PREAMBLE_REGEX in props.conf:

PREAMBLE_REGEX = <regex>
* A regular expression that lets Splunk software ignore "preamble lines", or lines that occur before lines that represent structured data.
* When set, Splunk software ignores these preamble lines, based on the pattern you specify.
* Default: not set
I found it at this path: C:\Program Files\Splunk\etc\system\default\web.conf