All Posts

Hi @spy_jr

It isn't possible to encrypt your indexes in their entirety with Splunk itself, but as others have suggested you can use various 3rd-party apps which try to encrypt parts of an event. The problem with these is that they are a nightmare from a resource-usage point of view, and search performance would be terrible. I really would advise against this.

Looking at your use case of preventing users from copying data from your Splunk instance and reading it on another: even if you use approaches like the above, or if there were a way to encrypt Splunk index data using Splunk itself, the key used to encrypt/decrypt the data would also need to be accessible by Splunk. That means any attacker who was able to access your data to exfiltrate it could also exfiltrate the keys and thus decrypt the data anyway.

If you are looking to protect/encrypt the data at rest (i.e. at a disk level), you could enable disk encryption at the operating system level (e.g., BitLocker, LUKS) to protect all data, including Splunk indexes. But again, this wouldn't protect the data if a user was able to access the running system.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Aside from the data manipulation, which is the easy part and for which there are answers here, @PickleRick points out that you can't get the data into the DM this way. So if you have to get already-indexed data into the DM, you will probably need to manipulate the data as shown in the existing answers, but collect the results into a new index and then run the DM off that index rather than the primary index, as sketched below.
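For example, a rough, untested sketch that reuses the foreach/json_object manipulation from the other answers in this thread and collects the results into a hypothetical summary-style index called dm_prepared (the index name, source index, and sourcetype are placeholders, and you would need to create dm_prepared first):

index=your_primary_index sourcetype=your_sourcetype
| foreach * [ | eval a=mvappend(a, if("<<FIELD>>"=="EventID", null(), json_object("location", location, "name", "<<FIELD>>", "value", '<<FIELD>>'))) ]
| fields - _raw
| mvexpand a
| fields a
| spath input=a
| fields - a
| collect index=dm_prepared

You could run something like this as a scheduled search to keep dm_prepared topped up, and then point the data model constraint at index=dm_prepared.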
@PrewinThomas you can't double mvexpand a result set on two MV fields, as you end up with 4 events (the cross product of the two fields) rather than the 2 pairings you want.
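A quick way to see the cross-product effect, using made-up values:

| makeresults
| eval name=mvappend("temperature", "humidity"), value=mvappend("21", "55")
| mvexpand name
| mvexpand value
| table name value

After the second mvexpand, every name is paired with every value, giving 4 rows instead of the intended 2.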
@PickleRick - off topic from the OP's original posting, but in response to the error: that templatized field error I believe is due to how mvappend works - it does not like null() but null is OK, i.e. this is fine

| foreach *
    [ | eval a=mvappend(a, if("<<FIELD>>"="location", null, json_object("location",'location',"name","<<FIELD>>","value",'<<FIELD>>'))) ]

and null() is fine if you take mvappend out of the equation, i.e.

    [ | eval a=if("<<FIELD>>"="location", null(), json_object("location",'location',"name","<<FIELD>>","value",'<<FIELD>>')) ]

See that this fails

| makeresults
| eval a="a"
| eval a=mvappend(a, null())
| eval c=mvcount(a)

but null on its own works.
My approach is for a situation where an attacker infiltrates my Splunk server and starts stealing data. I would like that stolen data to be unreadable in another Splunk instance unless they have the encryption key.
@lukasmecir 
I think column changes in the Analyst Queue GUI are session-based only and are not written to disk. I recommend submitting a feature request to Splunk Support.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
@thierry 
Is your field structure fixed and small? Then you can make it simpler, like below; otherwise it is better to go with foreach.

| <your_search>
| eval name=mvappend("temperature", "humidity")
| eval value=mvappend(temperature, humidity)
| fields location name value
| mvexpand name
| mvexpand value
| eval value=tonumber(value)
| table location name value

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
@spy_jr 
I don't think Splunk natively supports index-level encryption with key-based access control that would prevent someone from copying raw index data. You can encrypt/mask sensitive fields:
https://www.splunk.com/en_us/blog/tips-and-tricks/encrypting-and-decrypting-fields.html?locale=en_us
Also, you can have a look at this app (I haven't tested it personally): https://splunkbase.splunk.com/app/282
Alternatively, you can consider encrypted filesystems or external encryption tools.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
@pedropiin 
Make a hidden search and set your tokens, then use the token inside your panel.

<row depends="$alwaysHide$">
  <panel>
    <search id="token_generator">
      <query>
        | makeresults
        | eval tokenA="value1", tokenB="value2", tokenC="value3"
        | eval dynamic_link="/app/search/dashboard_name?fieldA=".tokenA."&amp;fieldB=".tokenB."&amp;fieldC=".tokenC
      </query>
      <done>
        <set token="myDynamicLink">$result.dynamic_link$</set>
      </done>
    </search>
  </panel>
</row>
<row>
  <panel>
    <html>
      <ul>
        <li><a href="$myDynamicLink$">Dynamic Dashboard</a></li>
      </ul>
    </html>
  </panel>
</row>

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
You can use a hidden table like @bowesmana suggests. Or you can use a bare <search/> element without a panel, like this:

<form version="1.1">
  <label>https://community.splunk.com/t5/forums/replypage/board-id/splunk-search/message-id/242210</label>
  <search>
    <query>
      | makeresults
      | eval Link4="https://$token_1$/$token_2$/$token_3$"
    </query>
    <done>
      <set token="Link4">$result.Link4$</set>
    </done>
  </search>
  ...
  <row>
    <panel id="panel1">
      <title>My Panel</title>
      <html>
        <style>
          ...
        </style>
      </html>
      <html>
        <li><a href="..." target="..."><b>Link1</b></a></li>
        <li><a href="..." target="..."><b>Link2</b></a></li>
        <li><a href="..." target="..."><b>Link3</b></a></li>
        <li><a href="$Link4$" target="..."><b>$Link4$</b></a></li>
      </html>
    </panel>
  </row>
</form>
Hi @sureshmani04 
Do you want to change the sourcetype of 1) already-indexed data, or 2) new data that is yet to be onboarded?
For case 1, the answer is no: once the data is ingested, we cannot alter or modify anything in the indexed data.
For case 2, you can refer to the previous reply. Please note that most of the time you do not need to change the sourcetype of an app/add-on, unless you have some specific requirements. Thanks.
Hi @spy_jr 
Actually, you can grant access to this particular index only to the required user IDs. That way you can easily control who can see, search, or do anything with the index.
Please check some discussions here:
https://community.splunk.com/t5/Splunk-Search/Is-there-a-way-to-encrypt-sensitive-data-in-index-time-and/m-p/640324
I have a group of indexes, one of which contains sensitive data that must be encrypted so that no one can copy and upload the data to another Splunk instance unless they have the key to decrypt it. Is this possible with Splunk?
I don't know what you mean when you say you can't create eval blocks inside a panel - you can do pretty much whatever you like inside a panel. For example, you can do this:

<panel id="panel1">
  <title>My Panel</title>
  <html>
    <style>
      ...
    </style>
  </html>
  <html>
    <li><a href="..." target="..."><b>Link1</b></a></li>
    <li><a href="..." target="..."><b>Link2</b></a></li>
    <li><a href="..." target="..."><b>Link3</b></a></li>
    <li><a href="..." target="..."><b>$Link4$</b></a></li>
  </html>
  <table depends="$hidden_table$">
    <search>
      <query>
        | makeresults
        | eval Link4="https://$token_1$/$token_2$/$token_3$"
      </query>
      <done>
        <set token="Link4">$result.Link4$</set>
      </done>
    </search>
  </table>
</panel>

So you have a hidden table inside the panel that performs any search you want and creates any kind of value you need; the done clause then creates the new Link4 token from your result, and that $Link4$ token is used in the <html> section of the panel.
You got it! You are right in that you would use OR to search dataset1 OR dataset2 OR datasetN..., and the use of rex/eval then plays with the data as it passes through the pipeline. When the pipeline sees an event that matches your first dataset, the first rex will extract an event id into the field event_id1, and when it sees an event matching the second rex statement it will extract a field called event_id2 for that event. At that point you will have N events, some with event_id1 and some with event_id2.
So, to create the common field which you can use stats on, the coalesce statement simply says: I am going to create a new field called event_id which gets its value from whichever of the two fields event_id1 and event_id2 is not null. If it sees the first event type, then event_id becomes event_id1, and for the second event type, event_id2. At that point, every event will have a new field called event_id.
And you perfectly picked up on the use of the fields statement to retain the fields you wanted through the stats. The stats is simply counting the number of times it sees each event id, and your guess is correct in that when count=1, it must be because there is only one event type for that event_id - if you know in your case that event_id1 will always exist, then you have your answer.
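Putting those pieces together, a skeleton of the kind of search being described (the index, source filters, and regexes are placeholders you would replace with your own):

(index=main source=dataset1) OR (index=main source=dataset2)
| rex "EventStarted id=(?<event_id1>\d+)"
| rex "EventCompleted id=(?<event_id2>\d+)"
| eval event_id=coalesce(event_id1, event_id2)
| fields event_id
| stats count by event_id
| where count=1

The where count=1 at the end keeps only the event_ids that appeared in just one of the two datasets.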
When changing the sourcetype, please note that any knowledge objects (field extractions, calculated fields, etc.) in the app that apply to the previous sourcetype will no longer apply, unless you modify them to apply to the new sourcetype.
It is likely possible to configure the app using the web UI to create the /local/inputs.conf stanzas, which could then be edited to use a different sourcetype. Another option would be to use transforms to change the sourcetype.
You can put these config files in the local directory of the app (e.g. /opt/splunk/etc/apps/Splunk_TA_MS_Security/local) on the heavy forwarder where you installed the app, or append their contents to existing files of the same name in the local directory.

props.conf

# e.g. if you want to change ms365:defender:incident to "ms:new:sourcetype:value".
# Add more stanzas for each sourcetype to change.
[ms365:defender:incident]
TRANSFORMS-ChangeSourceType = ChangeSourceType

transforms.conf

[ChangeSourceType]
# a custom regex can be set here to apply only to matching events
REGEX = .*
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::ms:new:sourcetype:value

Ref: https://docs.splunk.com/Documentation/Splunk/latest/Data/Advancedsourcetypeoverrides
I am looking to change the sourcetype for the Splunk Add-on for Microsoft Security app.
In the playbook, there is an "End" block created automatically along with the "Start" block. If you click on the "End" block, you can set outputs for the playbook, which can then be used by other playbooks that include this playbook via a playbook block.
That's very strange, as cacheEntriesLimit should disable the cache, forcing all clients to re-load static assets. Can you confirm that the setting is actually being applied, using btool?
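If btool output is hard to get hold of, a rough alternative (assuming your role is allowed to run the rest command) is to read back the effective web.conf value from the search bar:

| rest /services/configs/conf-web/settings splunk_server=local
| fields splunk_server cacheEntriesLimit

This simply returns the [settings] stanza of web.conf as the server currently sees it; the value shown should match what you configured.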
While you can go even more generic -

| foreach *
    [ | eval a=mvappend(a, if("<<FIELD>>"=="EventID", null(), json_object("location", location, "name", "<<FIELD>>", "value", '<<FIELD>>'))) ]
| fields - _raw
| mvexpand a
| fields a
| spath input=a
| fields - a

(it works but throws some exception about a templatized search for a field; that would have to be investigated deeper) - it won't do in the context of the datamodel. Datamodel constraints must be a single non-piped search.