All Posts

Hi, from what I have read so far, Splunk forms can be used to fetch/filter data based on the user's requirements. In that case the data is already present in Splunk. However, I wish to insert data into a specific index in Splunk. Can this also be done using Splunk forms?
Thank you for the feedback. The long answer to @PickleRick should cover most of my reply; just to clarify, there is no disk tier, only SSD all the way at a flat rate. This is why I was curious whether there was any point in keeping any, or just a minimum amount, of cold storage available. Retention time control seems to be the best argument so far.
That is probably the strongest argument right now, as there is only SSD storage at a flat rate. So if I configure nothing and just set a path to "storage" data:

[volume:storage]
path = /data/splunk/warm/ # adjust when correct disk is mounted
maxVolumeDataSizeMB = 2800000

Then use "storage" for both warm and cold, I assume that there is a "default" balance between home (hot/warm) and cold with regard to the allocated space. So I then have no control over the sizes of home and cold. However, I can set frozenTimePeriodInSecs and delete based on time from cold with higher accuracy. I assume I could just add another definition for the same mounted volume and divide the space between hot and cold:

[volume:varm]
path = /data/splunk/warm/ # adjust when correct disk is mounted
maxVolumeDataSizeMB = 1000000

[volume:cold]
path = /data/splunk/warm/ # adjust when correct disk is mounted
maxVolumeDataSizeMB = 1800000

This should produce similar results to just keeping the defaults (as indicated by other replies). The remaining question, then, is whether I should just separate the volumes into warm and cold so that I can more easily expand cold storage when needed, plus some possible efficiency gains. Thank you for your feedback!
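For illustration only, a minimal indexes.conf sketch of how an index might reference those two volumes; the index name, bucket subdirectories, and retention value below are hypothetical, not taken from this thread:

# indexes.conf -- hypothetical index using the volumes defined above
[my_index]
homePath   = volume:varm/my_index/db        # hot/warm buckets on the "varm" volume
coldPath   = volume:cold/my_index/colddb    # cold buckets on the "cold" volume
thawedPath = $SPLUNK_DB/my_index/thaweddb   # thawedPath cannot reference a volume
# roll buckets to frozen (deleted by default) after 90 days
frozenTimePeriodInSecs = 7776000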
Or even that:

| makeresults
| eval json="{\"TIMESTAMP\": 1742677200,\"SYSINFO\": \"{\\\"number_of_notconnect_interfaces\\\":0,\\\"hostname\\\":\\\"test\\\",\\\"number_of_transceivers\\\":{\\\"10G-LR\\\":10,\\\"100G-CWDM4\\\":20},\\\"number_of_bfd_peers\\\":10,\\\"number_of_bgp_peers\\\":10,\\\"number_of_disabled_interfaces\\\":10,\\\"number_of_subinterfaces\\\":{\\\"Ethernet1\\\":10,\\\"Ethernet2\\\":20},\\\"number_of_up_interfaces\\\":1}\"}"
| fromjson json
| fromjson SYSINFO
| fields number_of_subinterfaces
| fromjson number_of_subinterfaces
| fields - number_of_subinterfaces _time
| transpose header_field=column
| rename "row 1" as value, column as keys
@sureshkumaar Ensure that the REGEX in route_fortigate_traffic is correctly matching the events. Verify that the source path in props.conf is correct and matches the actual log file paths. Where do you have props.conf and transforms.conf?

Route and filter data - Splunk Documentation

Refer to these community links:
https://community.splunk.com/t5/Splunk-Cloud-Platform/Routing-log-data-to-different-indexes-based-on-the-source/m-p/701980
https://community.splunk.com/t5/Getting-Data-In/Route-index-data-based-on-source/m-p/688917
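Not necessarily the fix for this thread, but as an illustration of the ordering point: a sketch reusing the stanza names from the question, assuming that when several transforms in the same TRANSFORMS class write the same DEST_KEY, the last matching one wins, so the catch-all rule is listed before the specific one:

# props.conf -- sketch only
[source::.../TUC-*/OOB/TUC-*(50M)*.log]
TRANSFORMS-routing = route_nix_messages, route_fortigate_traffic

# transforms.conf -- sketch only
[route_nix_messages]
REGEX = .*
DEST_KEY = _MetaData:Index
FORMAT = os_linux

[route_fortigate_traffic]
REGEX = (?i)traffic|session|firewall|deny|accept
DEST_KEY = _MetaData:Index
FORMAT = nw_fortigate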
Traffic events are not getting routed to nw_fortigate and non-traffic events are not getting routed to os_linux. Can someone help?

props.conf
[source::.../TUC-*/OOB/TUC-*(50M)*.log]
TRANSFORMS-routing = route_fortigate_traffic, route_nix_messages

transforms.conf
[route_fortigate_traffic]
REGEX = (?i)traffic|session|firewall|deny|accept
DEST_KEY = _MetaData:Index
FORMAT = nw_fortigate

[route_nix_messages]
REGEX = .*
DEST_KEY = _MetaData:Index
FORMAT = os_linux
Hi @isoutamo @PickleRick, yes, you are correct, that solution will impact security. I ran this for testing purposes on Dev, just to see how Splunk works with a custom sudo script. My implementation reads a log file based on crontab.
@kiwiglen As has been said by @PickleRick, there is no recursion here, so this example handles up to 5 levels of subtask.

| makeresults format=csv data="USER,JOBNAME,TRAN,TRANNUM,PHAPPLID,PHTRAN,PHTRANNO,USRCPUT_MICROSEC
,APP3,CSMI,43856,APP7,QZ81,70322,72
,APP5,CSMI,20634,APP7,QZ81,70322,8860
,APP7,QZ81,70322,APP3,QZ81,43836,16043
GPDCFC26,APP3,QZ81,43836, , ,0,897
,APP3,CSMI,41839,APP5,QZ61,15551,51
,APP3,CSMI,41838,APP5,QZ61,15551,64
,APP3,CSMI,41837,APP5,QZ61,15551,79
,APP5,QZ61,15551,APP3,QZ61,41835,5232
GOTLIS12,APP3,QZ61,41835, , ,0,778
,APP5,QZ61,12,APP3,QZ61,1,5232
GOTLIS12,APP3,QZ61,1, , ,0,778
,APP5,CSMI,111,APP7,QZ81,110,8860
,APP7,QZ81,110,APP3,QZ81,100,16043
ABCDEF,APP3,QZ81,100, , ,0,897"
| fields USER,JOBNAME,TRAN,TRANNUM,PHAPPLID,PHTRAN,PHTRANNO,USRCPUT_MICROSEC
``` Now initialise level 0 numbers ```
| eval level=if(PHTRANNO=0, 0, null()), root_trannum=if(PHTRANNO=0, TRANNUM, null())
``` This logic handles 5 levels of "recursion" - as discussed earlier, it's not true recursion as you have to specify the operations for the max level count you need.
The logic works by cascading the USER and root TRANNUM down the events for the related subtasks. It performs the following actions:
- Create a parent id field containing TRANNUM, root TRANNUM, USER and level for the specific level wanted - in this case level 0 is PHTRANNO=0 in the IF test
- Collect all the values of those ids across all events
- Find the PHTRANNO value of the event in the list of parents
- Extract the user and root TRANNUM from the result if found ```
``` Get level 1 ids ```
| eval parent_id=if(PHTRANNO=0, TRANNUM.":".root_trannum.":".USER.":".1, null())
| eventstats values(parent_id) as parents
| eval data=split(mvindex(parents, mvfind(parents, "^".PHTRANNO.":")), ":")
| eval root_trannum=if(isnotnull(data), mvindex(data, 1), root_trannum), root_user=if(isnotnull(data), mvindex(data, 2), root_user), level=if(isnotnull(data), mvindex(data, 3), level), USER=coalesce(USER, root_user)
``` Get level 2 ids ```
| eval parent_id=if(level=1, TRANNUM.":".root_trannum.":".USER.":".2, null())
| eventstats values(parent_id) as parents
| eval data=split(mvindex(parents, mvfind(parents, "^".PHTRANNO.":")), ":")
| eval root_trannum=if(isnotnull(data), mvindex(data, 1), root_trannum), root_user=if(isnotnull(data), mvindex(data, 2), root_user), level=if(isnotnull(data), mvindex(data, 3), level), USER=coalesce(USER, root_user)
``` Get level 3 ids ```
| eval parent_id=if(level=2, TRANNUM.":".root_trannum.":".USER.":".3, null())
| eventstats values(parent_id) as parents
| eval data=split(mvindex(parents, mvfind(parents, "^".PHTRANNO.":")), ":")
| eval root_trannum=if(isnotnull(data), mvindex(data, 1), root_trannum), root_user=if(isnotnull(data), mvindex(data, 2), root_user), level=if(isnotnull(data), mvindex(data, 3), level), USER=coalesce(USER, root_user)
``` Get level 4 ids ```
| eval parent_id=if(level=3, TRANNUM.":".root_trannum.":".USER.":".4, null())
| eventstats values(parent_id) as parents
| eval data=split(mvindex(parents, mvfind(parents, "^".PHTRANNO.":")), ":")
| eval root_trannum=if(isnotnull(data), mvindex(data, 1), root_trannum), root_user=if(isnotnull(data), mvindex(data, 2), root_user), level=if(isnotnull(data), mvindex(data, 3), level), USER=coalesce(USER, root_user)
``` Get level 5 ids ```
| eval parent_id=if(level=4, TRANNUM.":".root_trannum.":".USER.":".5, null())
| eventstats values(parent_id) as parents
| eval data=split(mvindex(parents, mvfind(parents, "^".PHTRANNO.":")), ":")
| eval root_trannum=if(isnotnull(data), mvindex(data, 1), root_trannum), root_user=if(isnotnull(data), mvindex(data, 2), root_user), level=if(isnotnull(data), mvindex(data, 3), level), USER=coalesce(USER, root_user)
| fields - root_user parents parent_id data
``` This counts all occurrences of the PHTRAN and joins the USER field into the child events ```
| eventstats count(eval(PHTRANNO!=0)) as subTasks by USER root_trannum
``` Now count the executions of each USER and evaluate the timings ```
| stats count(eval(PHTRANNO=0)) as Executions sum(USRCPUT_MICROSEC) as tot_USRCPUT_MICROSEC avg(USRCPUT_MICROSEC) as avg_USRCPUT_MICROSEC sum(eval(if(PHTRANNO=0,subTasks, 0))) as subTasks by USER
``` And adjust the subtask count, as we treated the main task as a subtask, then calculate the average subtask count ```
| eval avg_subTasks=subTasks/Executions

Hopefully the comments help explain what's going on. Without knowing exactly what you want from the averages and totals you may need to tweak this, and note that this is an EXPENSIVE search, so if you're dealing with a large data set it may be slow. If you have questions about the implementation, just ask. As with most Splunk things, there may be a way to improve this, but with no correlation information other than the TRANNUM, it's tricky.
EID-I-2530
Maybe you should create an idea for that at ideas.splunk.com?
@kiran_panchavat, that doesn't work for us; we need role restriction by IP, not a service or server restriction. Kind Regards, Andre
Thank you @PickleRick, I think that would work for us; we have SAML and limit it to Kerberos only. This should prevent taking your session with you from one network segment to another (the network segments are different AD domains too). With SAML auth, can you still manage the role assignments from Splunk, like AD group -> role, or does that need to be done on the SAML provider? Kind Regards, Andre
This is exactly like @PickleRick said: never do it like this! You lose all security in your system! If/when you need that information, a better way is to use e.g. cron and export the output to a file which is read by Splunk. Just grant the needed access to that file with setfacl. And don't use chmod with 777!
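For illustration only, a minimal sketch of the Splunk side of that approach, assuming a hypothetical file /var/log/myscript/output.log that the cron job writes and that the splunk user has been given read access to via setfacl; the index and sourcetype names below are placeholders:

# inputs.conf on the forwarder -- hypothetical paths, index and sourcetype
[monitor:///var/log/myscript/output.log]
index = my_index
sourcetype = myscript:output
disabled = false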
It's exactly like @PickleRick said. If you get parsed data, the only pipeline which can modify it is rulesets (ingest actions). Even though this is not the issue in this case, I'm not sure that you @MichaelM1 remember how Splunk manages the order of these:

[default]
TRANSFORMS-projectid = addprojectid
TRANSFORMS-IntermediateForwarder = addIntermediateForwarder
TRANSFORMS-GUIDe = addGUIDe

When you add several TRANSFORMS on their own lines, Splunk always uses the ASCII order of those names when it selects the execution order! For that reason, when you want to define the order, you have two options:
1. Name them so that ASCII order gives the wanted order, like TRANSFORMS-01-first, TRANSFORMS-02-second etc.
2. Put them all on one line, like TRANSFORMS-all = xyzzy, abcd, 1123-last

As @livehybrid said, you should use INGEST_EVAL to set values only if they don't already exist, as he showed you.
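For illustration, a hedged props.conf sketch of those two options, reusing the transform names from this thread; use one variant or the other, not both:

# Option 1: rely on ASCII ordering of the class names
[default]
TRANSFORMS-01-projectid = addprojectid
TRANSFORMS-02-intermediateforwarder = addIntermediateForwarder
TRANSFORMS-03-guide = addGUIDe

# Option 2: a single class; transforms run left to right in the listed order
[default]
TRANSFORMS-all = addprojectid, addIntermediateForwarder, addGUIDe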
OMG. Don't do that! This way you're allowing anyone who has permission to run local programs (and I can think of several ways to do so) to effectively do anything with your system. This is like saying "Oh, I solved the problem with my front door lock by leaving the door wide open".
Continuing my last reply, below are the errors from my troubleshooting:
1. sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option -> Because Splunk was running as the root user; I changed Splunk back to a non-root user and then saw the error below.
2. sudo: sorry, you must have a tty to run sudo -> Required the !requiretty setting in /etc/sudoers.

For me Splunk is a powerful tool; with this workaround, Ansible tasks can be run from Splunk directly.
Hi, I had the same case too, but now it's solved. Below is my workaround:

1. Add the splunk user to /etc/sudoers:
splunk-user ALL=(ALL) NOPASSWD: ALL

2. Add !requiretty for the splunk user:
Defaults:splunk-user !requiretty

For point no. 2: basically Splunk runs the script in a non-interactive environment by default, so we need to add that setting to let it through. Running the command manually in the CLI is interactive, which is why !requiretty is not needed there.
@ITWhisperer @catdadof3 Yes, I want to set $entityTokenFirst$ to * when the user selects "ALL" in the dropdown. I am observing that search queries are being executed automatically (auto-run) whenever you switch dropdown values or filters, without explicitly hitting the submit button. I am looking for an alternative way to achieve this behavior without triggering auto-run searches. How do I apply multiple conditions only when they hit submit?

<change>
  <condition value="ALL">
    <set token="entityTokenFirst">*</set>
  </condition>
  <condition>
    <!-- Split the value and set tokens for both parts -->
    <set token="entityLabel">$label$</set>
    <eval token="searchName">mvindex(split($value$, ","),1)</eval>
    <eval token="entityTokenFirst">mvindex(split($value$, ","),0)</eval>
  </condition>
</change>

The dashboard below works only when I hit the submit button; no condition is being used here.

<form>
  <label>stats Clone metrics</label>
  <fieldset submitButton="true">
    <input type="dropdown" token="indexToken1" searchWhenChanged="false">
      <label>Environment</label>
      <choice value="prod,prod">PROD</choice>
      <choice value="np,test">TEST</choice>
      <change>
        <eval token="stageToken">mvindex(split($value$,","),1)</eval>
        <eval token="indexToken">mvindex(split($value$,","),0)</eval>
      </change>
      <default>np,test</default>
    </input>
    <input type="dropdown" token="entityToken" searchWhenChanged="false">
      <label>Data Entity</label>
      <choice value="target">Target</choice>
      <choice value="product">Product</choice>
      <choice value="*">ALL</choice>
    </input>
    <input type="time" token="timeToken" searchWhenChanged="false">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
</form>
Hi @molla, The geo_countries lookup shipped with Splunk provides boundaries for countries. The tutorial at https://docs.splunk.com/Documentation/Splunk/latest/Viz/GenerateMap provides an example for counties, but you can replace the county references with country references:

| makeresults format=csv data="x,country
3,United States
5,United States
4,Canada
1,Canada
1,Mexico
2,Mexico"
| stats sum(x) by country
| geom geo_countries featureIdField=country

The output of geom can be used with choropleth maps in both classic (Simple XML) dashboards and Dashboard Studio. You can use the inputlookup command to see the list of supported countries:

| inputlookup geo_countries
| table featureId
Thank you very much, ITWhisperer. That works.