All Posts


Hi all, I have a data flow in JSON format from one host that I ingest with HEC, so I have one host, one source and one sourcetype for all events. I would like to override the host, source and sourcetype values based on regexes, and I'm able to do this. The issue is that the data flow is an elaboration by an external system (Logstash) that takes raw logs (e.g. from Linux systems) and saves them in a field of the JSON ("message"), adding many other fields. So, after the host, source and sourcetype overriding (which works fine) I would like to remove all the extra content in the events and keep only the content of the message field (the raw logs). I'm able to do this too, but the issue is that I can't do both transformations: in other words, I can override the values but the extra content removal doesn't work, or I can remove the extra content but the overriding doesn't work. I have the following configuration in my props.conf:

[logstash]
# set host
TRANSFORMS-sethost = set_hostname_logstash
# set sourcetype Linux
TRANSFORMS-setsourcetype_linux_audit = set_sourcetype_logstash_linux_audit
# set source
TRANSFORMS-setsource = set_source_logstash_linux

# restoring original raw log
[linux_audit]
SEDCMD-raw_data_linux_audit = s/.*\"message\":\"([^\"]+).*/\1/g

As you can see, in the first stanza I override the sourcetype from logstash to linux_audit, and in the second I try to remove the extra content using the linux_audit sourcetype. If I use the logstash sourcetype in the second stanza as well, the extra content is removed, but the field overriding (which relies on the extra content) doesn't work. I also tried to set up a priority using the props.conf "priority" option, with no luck. I also tried to use a source stanza for the first part, because source usually has a higher priority than sourcetype, but with the same result. Can anyone give me a hint on how to solve this issue? Thank you in advance. Ciao. Giuseppe
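For reference, the transforms.conf stanzas named above look along these lines - the stanza names are my real ones, but the regexes and destination keys shown here are simplified illustrations, not the exact config:

[set_hostname_logstash]
# pull the real host out of the Logstash JSON envelope (regex illustrative)
REGEX = "host"\s*:\s*"([^"]+)"
DEST_KEY = MetaData:Host
FORMAT = host::$1

[set_sourcetype_logstash_linux_audit]
# rewrite the sourcetype when the event looks like a Linux audit log (regex illustrative)
REGEX = "type"\s*:\s*"linux_audit"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::linux_audit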
Hi @gayathrc, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @AL3Z, as @bowesmana said, this is a very frequent question in this Community and you'll find many good answers to it (also from me and him!) analyzing many different situations and use cases. Anyway, in a few words: you have to create a lookup (called e.g. perimeter.csv) with at least one column (host) containing the list of hosts to monitor, and then run a search like the following:

| tstats count WHERE index=your_index BY host
| append
    [ | inputlookup perimeter.csv
      | eval count=0
      | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao. Giuseppe
@richgalloway I have added version 1.1 to all the XML dashboards and unrestricted all the older versions of jQuery. But my question is: is there any impact if we do not fix all the suggested issues for the Python scripts? Is it mandatory to fix them? Does it impact Splunk?
Use the --data-urlencode option instead of -d (--data):

curl -H "Authorization: Bearer <token ID here>" -k https://host.domain.com:8089/services/search/jobs --data-urlencode search='<your search term>'

One more thing: SPL uses lots of double quotes. Quoting your search with single quotes saves you a lot of escaping.
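The POST returns a sid for the new job; once it finishes you can fetch the results with something along these lines (this is the standard jobs results endpoint - substitute your own sid and host):

curl -H "Authorization: Bearer <token ID here>" -k "https://host.domain.com:8089/services/search/jobs/<sid>/results?output_mode=json"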
Instead of values, you should see a field named values{} because that's an array. Because you are only interested in numeric min, max, and avg, you only need to substitute this name:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=$plugin_instance1$
| stats min(values{}) as min max(values{}) as max avg(values{}) as avg
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)
That would be logical NOT:

index="source*" mobilePhoneNumber countryCode NOT firstName NOT lastName
| stats count by matchField
Is this what you are looking for?

[Sample results table omitted - columns included ListValues.ListValue.{_value,displayName,id}, Record.{contentId,levelGuid,levelId,moduleId,parentId}, Reference.{_value,id}, Users.User.{_value,firstName,id,lastName,middleName}, guid, id, p, the table.* layout attributes, type, value, and xmlConvertedValue; the rows carried values such as "needs 22981 as field name", "needs 22876 as field name", and so on.]

Just repeat the same array flatten method:

| fields - Record.Field{}.*
| spath path=Record.Field{}
| mvexpand Record.Field{}
| spath input=Record.Field{}
| mvexpand p{}
| mvexpand table.tbody.tr.td.p{}
| eval p = coalesce(p, 'p{}'), table.tbody.tr.td.p = coalesce(table.tbody.tr.td.p, 'table.tbody.tr.td.p{}')
| fields - Record.Field{} p{} table.tbody.tr.td.p{}

Here's an emulation you can play with and compare with your data:

| makeresults | eval _raw = "{ \"Record\": { \"contentId\": \"429636\", \"levelId\": \"57\", \"levelGuid\": \"3c5b481a-6698-49f5-8111-e43bb7604486\", \"moduleId\": \"83\", \"parentId\": \"0\", \"Field\": [ { \"id\": \"22811\", \"guid\": \"6c6bbe96-deab-46ab-b83b-461364a204e0\", \"type\": \"1\", \"_value\": \"Need This with 22811 as the field name\" }, { \"id\": \"22810\", \"guid\": \"08f66941-8f2f-42ce-87ae-7bec95bb5d3b\", \"type\": \"1\", \"p\": \"need this with 22810 as the field name\" }, { \"id\": \"478\", \"guid\": \"4e17baea-f624-4d1a-9c8c-83dd18448689\", \"type\": \"1\", \"p\": [ \"Needs to have 478 as field name\", \"Needs to have 478 as field name\" ] }, { \"id\": \"22859\", \"guid\": \"f45d3578-100e-44aa-b3d3-1526aa080742\", \"type\": \"3\", \"xmlConvertedValue\": \"2023-06-16T00:00:00Z\", \"_value\": \"needs 22859 as field name\" }, { \"id\": \"482\", \"guid\": \"a7ae0730-508b-4545-8cdc-fb68fc2e985a\", \"type\": \"3\", \"xmlConvertedValue\": \"2023-08-22T00:00:00Z\", \"_value\": \"needs 482 as field name\" }, { \"id\": \"22791\", \"guid\": \"89fb3582-c325-4bc9-812e-0d25e319bc52\", \"type\": \"4\", \"ListValues\": { \"ListValue\": { \"id\": \"74192\", \"displayName\": \"Exception Closed\", \"_value\": \"needs 22791 as field name\" } } }, { \"id\": \"22818\",
\"guid\": \"e2388e72-cace-42e6-9364-4f936df1b7f4\", \"type\": \"4\", \"ListValues\": { \"ListValue\": { \"id\": \"74414\", \"displayName\": \"Yes\", \"_value\": \"needs 22818 as field name\" } } }, { \"id\": \"22981\", \"guid\": \"8f8df6e3-8fb8-478b-8aa0-0be02bec24e3\", \"type\": \"4\", \"ListValues\": { \"ListValue\": { \"id\": \"74550\", \"displayName\": \"Critical\", \"_value\": \"needs 22981 as field name\" } } }, { \"id\": \"22876\", \"guid\": \"4cc725ad-d78d-4fc0-a3b2-c2805da8f29a\", \"type\": \"9\", \"Reference\": { \"id\": \"256681\", \"_value\": \"needs 22876 as field name\" } }, { \"id\": \"23445\", \"guid\": \"f4f262f7-290a-4ffc-af2b-dcccde673dba\", \"type\": \"9\", \"Reference\": { \"id\": \"255761\", \"_value\": \"needs 23445 as field name\" } }, { \"id\": \"1675\", \"guid\": \"ea8f9a24-3d35-49f9-b74e-e3b9e48f8b3b\", \"type\": \"2\" }, { \"id\": \"22812\", \"guid\": \"e563eb9e-6390-406a-ac79-386e1c3006a3\", \"type\": \"2\", \"_value\": \"needs 22812 as field name\" }, { \"id\": \"22863\", \"guid\": \"a9fe7505-5877-4bdf-aa28-9f6c86af90ae\", \"type\": \"8\", \"Users\": { \"User\": { \"id\": \"5117\", \"firstName\": \"data\", \"middleName\": \"data\", \"lastName\": \"data\", \"_value\": \"needs 22863 as field name\" } } }, { \"id\": \"22784\", \"guid\": \"4466fd31-3ab3-4117-8aa0-40f765d20c10\", \"type\": \"3\", \"xmlConvertedValue\": \"2023-07-18T00:00:00Z\", \"_value\": \"7/18/2023\" }, { \"id\": \"22786\", \"guid\": \"d1c7af3e-a350-4e59-9353-132a04a73641\", \"type\": \"1\" }, { \"id\": \"2808\", \"guid\": \"4392ae76-9ee1-45bf-ac31-9e323a518622\", \"type\": \"1\", \"p\": \"needs 2808 as field name\" }, { \"id\": \"22802\", \"guid\": \"ad7d4268-e386-441d-90b1-2da2fba0d002\", \"type\": \"1\", \"table\": { \"style\": \"width: 954px\", \"border\": \"1\", \"cellspacing\": \"0\", \"cellpadding\": \"0\", \"tbody\": { \"tr\": { \"style\": \"height: 73.05pt\", \"td\": { \"style\": \"width: 715.5pt\", \"valign\": \"top\", \"p\": \"needs 22802 as field name\" } } } } }, { \"id\": \"8031\", \"guid\": \"fbcfdf2c-2990-41d1-9139-8a1d255688b0\", \"type\": \"1\", \"table\": { \"style\": \"width: 954px\", \"border\": \"1\", \"cellspacing\": \"0\", \"cellpadding\": \"0\", \"tbody\": { \"tr\": { \"style\": \"height: 71.1pt\", \"td\": { \"style\": \"width: 715.5pt\", \"valign\": \"top\", \"p\": [ \"needs 8031 as field name\", \"needs 8031 as field name\" ] } } } } }, { \"id\": \"22820\", \"guid\": \"0f98830d-48b3-497c-b965-55be276037f2\", \"type\": \"1\", \"p\": \"needs 22820 as field name\" }, { \"id\": \"22807\", \"guid\": \"8aa0d0fa-632d-4dfa-9867-b0cc407fa96b\", \"type\": \"3\" }, { \"id\": \"22855\", \"guid\": \"e55cbc59-ad8d-4831-8e6f-d350046026e9\", \"type\": \"1\" }, { \"id\": \"8032\", \"guid\": \"f916365b-e6eb-4ab9-a4ff-c7812a404854\", \"type\": \"1\", \"p\": \"needs 8032 as field name\" }, { \"id\": \"22792\", \"guid\": \"8e70c28a-2eec-4e38-b78b-5495c2854b3e\", \"type\": \"1\", \"_value\": \"needs 22792 as field name \" }, { \"id\": 22793, \"guid\": \"ffeaa385-643a-4f04-8a00-c28ddd026b7f\", \"type\": \"4\", \"ListValues\": \"\" }, { \"id\": \"22795\", \"guid\": \"c46eac60-d86e-4af4-9292-d194a601f8b6\", \"type\": \"1\" }, { \"id\": \"22797\", \"guid\": \"8cd6e398-e565-4034-8db8-2e2ecb2f0b31\", \"type\": \"4\", \"ListValues\": { \"ListValue\": { \"id\": \"73060\", \"displayName\": \"data\", \"_value\": \"needs 22797 as field name\" } } }, { \"id\": \"22799\", \"guid\": \"20823b18-cb9b-47a3-854d-58f874164b27\", \"type\": \"4\", \"ListValues\": { \"ListValue\": { \"id\": \"74410\", 
\"displayName\": \"Other\", \"_value\": \"needs 22799 as field name\" } } }, { \"id\": \"22798\", \"guid\": \"5b32be4c-bc40-45b3-add4-1b22162fd882\", \"type\": \"4\", \"ListValues\": { \"ListValue\": { \"id\": \"74405\", \"displayName\": \"N/A\", \"_value\": \"needs 22798 as field name\" } } }, { \"id\": \"22800\", \"guid\": \"6b020db0-780f-4eaf-8381-c122425b71ed\", \"type\": \"1\", \"p\": \"needs 22800 as field name\" }, { \"id\": \"22801\", \"guid\": \"06334da8-5392-4a9d-a3eb-d4075ee30787\", \"type\": \"1\", \"p\": \"needs 22801 as field name\" }, { \"id\": \"22794\", \"guid\": \"25da1de8-8e81-4281-8ef3-d82d1dc005ad\", \"type\": \"4\", \"ListValues\": { \"ListValue\": { \"id\": \"74398\", \"displayName\": \"Yes\", \"_value\": \"needs 22794 as field name\" } } }, { \"id\": \"22813\", \"guid\": \"89760b4f-49be-40ad-8429-89c247e3e95a\", \"type\": \"1\", \"p\": \"needs 22813 as field name\" }, { \"id\": \"22803\", \"guid\": \"03b6c826-e15c-4356-89e8-b0bd509aaeb5\", \"type\": \"3\", \"xmlConvertedValue\": \"2023-06-15T00:00:00Z\", \"_value\": \"needs 22803 as field name\" }, { \"id\": \"22804\", \"guid\": \"d7683f9c-97bb-461a-97df-36ec6596b4fc\", \"type\": \"1\", \"p\": \"needs 22804 as field name\" }, { \"id\": \"22805\", \"guid\": \"33386a3a-c331-4d8c-9825-166c0a5235c2\", \"type\": \"3\", \"xmlConvertedValue\": \"2023-06-15T00:00:00Z\", \"_value\": \"needs 22805 as field name\" }, { \"id\": \"22806\", \"guid\": \"cd486293-9857-475c-9da3-a06f836edb59\", \"type\": \"1\", \"p\": \"needs 22806 as field name\" } ] } }" | spath ``` emulation above ```            
What do you mean by too long? Too many lines, or performance? You can combine eval statements, i.e.

| eval Rank_Inc=round((pos-1)/(count-1)*100, 0), Rank_Exc=round((pos+1)/(count+1)*100, 0)

From a performance point of view, as soon as you sort data it will be running on the search head, and both eventstats and streamstats must also run on the search head. But your data set should be pretty small at that point regardless of how many students you have - you only need Student and Score on the search head. So if it's performance, make sure you add a fields statement to limit the fields before the sort.
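If it helps, here's a minimal sketch of both suggestions together (the base search is assumed - substitute your own):

index=your_index
``` keep only what the search head actually needs before sorting ```
| fields Student Score
| sort Score
| streamstats count as pos
| eventstats count
``` the two evals combined into one ```
| eval Rank_Inc=round((pos-1)/(count-1)*100, 0), Rank_Exc=round((pos+1)/(count+1)*100, 0)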
Hello, I tried your suggestion and it worked. I accepted this solution and will try it on real data. 1) Can you explain what this eval is for? It looks like, if range is 0, you replace the value with null and fill down with the previous value, except for position one?

| eval Rank=if(Rank=1 OR range != 0, Rank, null())

2) Would it be possible to use only 1 streamstats instead of 2 streamstats? Thank you so much for your help
Hello, I tried your alternative solution and it worked fine, but if there are 2 similar scores, it doesn't give the same rank to the 2 students as in your first solution. Your alternative solution + data:

| makeresults format=csv data="Student, Score
a,100
b,95
c,84
d,73
e,73
f,54
g,43
h,37
i,22
j,12"
| sort Score
| streamstats count as pos
| eventstats count
| eval Rank_Inc=round((pos-1)/(count-1)*100, 0)
| eval Rank_Exc=round((pos+1)/(count+1)*100, 0)
| table Student, Score, count, pos, Rank_Exc, Rank_Inc
| sort - Score

I need to add the following lines by ITWhisperer to give the same rank:

| streamstats window=2 range(Score) as range
| eval pos=if(pos=1 OR range != 0, pos, null())
| filldown pos

The final search is quite long: it contains 2 streamstats and 1 eventstats. Do you know if there's a way to shorten it? I appreciate your assistance. Thanks. Final solution:

| makeresults format=csv data="Student, Score
a,100
b,95
c,84
d,73
e,73
f,54
g,43
h,37
i,22
j,12"
| sort Score
| streamstats count as pos
| eventstats count
| streamstats window=2 range(Score) as range
| eval pos=if(pos=1 OR range != 0, pos, null())
| filldown pos
| eval Rank_Inc=round((pos-1)/(count-1)*100, 0)
| eval Rank_Exc=round((pos+1)/(count+1)*100, 0)
| table Student, Score, count, pos, Rank_Exc, Rank_Inc
| sort - Score
Do a search in this community and you will find many many examples of the same question being answered.  
Hi, I'd like to ask about the version of the Splunk TA "Palo Alto Networks App for Splunk" (Splunk_TA_paloalto). Our Palo Alto machines will be replaced and the PAN-OS will change from 9.1 to 10.2.4. What is the appropriate version of the TA for PAN-OS 10.2.4? Our "Splunk_TA_paloalto" is now 7.1.0. Thanks in advance.
I am currently integrating Splunk SOAR with Forcepoint Web Security. I am testing the connectivity but getting an SSL: UNSUPPORTED PROTOCOL error. Forcepoint currently supports only up to TLS 1.1. Is there any way I can set/modify SOAR/Forcepoint to use TLS 1.1 in the meantime instead of 1.2?
Hi all, I am facing an issue: where exactly can we troubleshoot when a host stops sending cmd logs to Splunk? Thanks
@LearningGuy as I said in the other post - you can probably solve that problem, and as usual, @ITWhisperer comes up with the perfect elegant solution!
Great, thanks a lot @bowesmana, much appreciated!
To get p so that it takes either the single or the mv field, use

| eval p_values=coalesce(p, 'p{}')

I don't understand the ListValues.ListValue part - these are not arrays, so they are only single-value fields in your example. Can you give an example of what, in your sample data, should get mapped to the top array in the _value and p cases?
As to why the user is a $ sign, that would come from how the user field is being extracted from your data. Much will depend on the data format you're using (XML or otherwise) and the TA you have installed to extract Windows event log data. If you run this search in Verbose mode:

index=WinEventLog* EventID=4625 earliest=-d@d latest=@d
| head 1

you will see the raw data and extracted fields for one event, and on the left-hand side you will see the extracted fields. If there is only a $ sign, then that's probably because the real user is not in the data - or it's not being extracted correctly. Look at this regarding the event log: https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4625 As for the first/last login time, do this:

index=WinEventLog* EventID=4625 earliest=-d@d latest=@d
| stats min(_time) as FirstEvent max(_time) as LastEvent count by user, action, subject, message

Look at this list of aggregation functions you can use to get information in the stats command: https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Stats#Stats_function_options
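If user really isn't being extracted, here's a quick inline extraction sketch to sanity-check the raw text - the "Account Name" layout is assumed from the standard 4625 message, so adjust the regex to your actual events:

index=WinEventLog* EventID=4625 earliest=-d@d latest=@d
``` pull the first Account Name in the raw 4625 text into a test field ```
| rex "Account Name:\s+(?<user_test>\S+)"
| stats count by user_test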
@inventsekar You can use whatever two variables you like: a/b, k/v, key/value. In the foreach, using a var name with a _ prefix means that it will not be generally visible as a field, so in case you forget to remove the field _key, it will not be seen as part of the data. I often use that just to make sure temporary fields are hidden and don't become part of the working dataset. The syntax {_key}=mvindex(value,<<FIELD>>) uses Splunk's {} encoding to create a new field (left-hand side) whose name is the VALUE of _key, and it takes the n'th multivalue element from the value MV based on <<FIELD>>, which is effectively a loop over the values of the foreach statement: 0 1 2 3 4... It's the same as doing this:

| makeresults
| fields - _time
| eval key="NAME", value="ANTONY"
| eval {key}=value

where you will end up with a new field called NAME with the value ANTONY. There should really be a cleanup to remove the temporary field names k, v, _k, so a | fields statement would be a good idea at the end.
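For reference, here's a minimal self-contained sketch of the whole pattern - the key_* fields and the values are made up purely for illustration:

| makeresults
| fields - _time
``` made-up data: two field-name holders and a multivalue list of values ```
| eval key_1="NAME", key_2="CITY", value=split("ANTONY,ROME", ",")
``` "<<FIELD>>" is the literal field name, '<<FIELD>>' is its value; _idx and _k are _-prefixed so they stay hidden ```
| foreach key_*
    [ eval _idx=tonumber(replace("<<FIELD>>", "key_", "")) - 1,
           _k='<<FIELD>>',
           {_k}=mvindex(value, _idx) ]
``` cleanup of the temporary fields ```
| fields - key_* value

This ends up with NAME=ANTONY and CITY=ROME as ordinary fields, built the same way as the {_key}=mvindex(value,<<FIELD>>) trick above.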