Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

My app contains an indexes.conf that declares the index. The app is installed on the heavy forwarder but not on the indexer. The problem is that data does not land on the indexer.
Thanks for looking into it. However, it did not go through; it still gives an error: The argument '(eval(action IN (Not Found,Forbidden)))' is invalid
Hello @richgalloway, Thanks for the information, I will try that! Regards, GaetanVP
Hi, I have this query:

| makeresults
| eval _raw="{\"name\": \"my name\", \"values\": [{\"rank\": 1, \"value\": \"\"}, {\"rank\": 2, \"value\": \"a\"}, {\"rank\": 3, \"value\": \"b\"}, {\"rank\": 4, \"value\": \"c\"}]}"
| spath
| rename values{}.rank as rank
| rename values{}.value as value
| table name, rank, value

Because value is an empty string in the first item of values, the values in the table are shifted up by one and no longer aligned with the rank. How could I conditionally change the value to, say, [empty] when it is an empty string in the data?
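One way to work around the misalignment (a sketch, not tested against live data; the json_* eval functions need Splunk 8.x or later) is to expand the array explicitly instead of relying on spath's multivalue fields, substituting a placeholder before tabling:

| makeresults
| eval _raw="{\"name\": \"my name\", \"values\": [{\"rank\": 1, \"value\": \"\"}, {\"rank\": 2, \"value\": \"a\"}, {\"rank\": 3, \"value\": \"b\"}, {\"rank\": 4, \"value\": \"c\"}]}"
| eval name=json_extract(_raw,"name"), values=json_array_to_mv(json_extract(_raw,"values"))
| mvexpand values
| eval rank=json_extract(values,"rank"), value=json_extract(values,"value")
| eval value=if(value=="","[empty]",value)
| table name, rank, value

Because each array element becomes its own event before rank and value are extracted, the pairing survives even when value is empty.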
You are almost there. The comparison values need quotes, and count(eval(...)) takes no space before the parenthesis:

| stats count(eval(action IN ("Not Found","Forbidden"))) as failures by src
| where failures>100
| table src
The feature you are looking for is trellis. But Splunk doesn't currently do trellis for table visualizations. (I'm almost sure that Grafana does.) You can sort of hack something yourself if you are willing to get into nitty-gritty Simple XML programming (or Dashboard Studio source). Oh, you also need to know all possible values of SectionName in advance. The basic idea is:

1. Run a <query /> to populate an aggregate token with values of Attribute in the same search window, e.g.,

index = websphere_cct (Object= "HJn5server1" Env="Prod") OR (Object = "HJn7server3" Env="UAT") SectionName="Process Definition"
| spath path=Attributes
| eval Attributes = mvappend("SectionName", json_array_to_mv(json_keys(Attributes)))

(Note this only runs in Splunk 8.0 and later.)

2. Use <condition><progress /></condition> to set or unset a dedicated token for every possible SectionName value. If the value exists in that aggregate token, set the token; otherwise unset it.

3. Use these dedicated tokens to hide or show tables, one for each possible SectionName value.

You can read about hide-and-show in Access tokens to show or hide user interface components, and about setting dynamic tokens in Search tokens for dynamic display example. Here is a mock dashboard you can play with. (I included comments about the actual search that you can substitute.) Alas! The code is too long. You can download/copy from here: Mock table trellis in Splunk Simple XML.

Here is a screenshot: As you can see, from your illustrated attribute list of 9, my mock search pretends to have found 4. So, only those 4 corresponding trellis tables show on the left-hand side. If you edit the attribute selection (in source), different tables will show. In edit mode, all 9 tables are visible, with the hidden ones in grey. (The right-hand side is the big table you illustrated, with all 9 attributes.)
Several notes:
- You could have saved volunteers lots of time (and done yourself a favor) by illustrating sample data that matches your desired output. The JSON in the description has too little in common with the table you show.
- As your search restricts SectionName to "Process Definition", it doesn't seem to make sense to list SectionName in the table. (SectionName is not an Attribute, anyway.) But I still included it in my emulation.
- Maintenance is painful and not very scalable compared with a true trellis feature.
Hope this helps.
Hi! Some good news: we've found a better way to handle this. You can use a globally defined function to translate your strings: i18n._("string_to_translate") can be converted to _("string_to_translate"). What's even better is that this works in both 8.x and 9.x. Just be careful if you (like us) import underscore as _ in your scripts as well, as this can cause a name clash. In that case, simply renaming the imported library will suffice.
Hello Splunker, I'm trying to join two field values in a stats command using eval. It looks like I'm doing it wrong; please help me with the correct syntax.

| stats count (eval(action="Not Found",action="Forbidden")) as failures by src
| where failures>100
| table src

Basically, I'm trying to count both "Not Found" and "Forbidden" as failures that happened from a single source. Any help here is appreciated.

Thanks, Moh
Hello, I have a table with 3 columns:

Domain   Environment   % of deployments
hello    qa            12
hello1   dr            13
hello2   prod          13
hello3   dev           15

And I would like to achieve this:

Domain   qa   dr   dev   prod
hello    12   13   14    15
hello1   1    2    3     4
hello2   3    2    4     6
hello3   1    3    5     7

The numbers in the cells are the % of deployments for each environment and domain. I've tried this, but it is not working as expected:

| eventstats count by SERVERS
| eventstats dc(SERVERS) as Total_Servers by Domain Environment
| eventstats dc(SERVERS) as Total_Servers
| eval "% Of deployments by domain&env" = (Deployed_Servers/Total_Servers)*100
| search Sprint!=*required AND Sprint!=*deleted* AND Sprint!="?"
| eval SH_{Environment}=count
| eventstats values(SH_*) as * by Domain
| dedup Environment Domain
| table Domain SH* "% Of deployments by domain&env"
| rename SH_dr as DR SH_production as Production SH_qa as QA SH_staging as Staging
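For the pivot itself, xyseries (or chart) usually does this directly. A sketch, assuming each row already carries Domain, Environment, and the computed "% Of deployments by domain&env" field:

| table Domain Environment "% Of deployments by domain&env"
| xyseries Domain Environment "% Of deployments by domain&env"

This turns each distinct Environment value into its own column, with the percentage as the cell value.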
Hi All,

I have a csv file in Splunk that I am searching on. I am looking to get the total monthly spend on each account number, and then the total spend per month (across all account numbers). Here is the query I've created:

... main search
| rename Order_Date as Month
| stats count by Account_Number, Total_Sum, Month
| eventstats sum(Total_Sum) as Monthly_Total by Month
| sort - Total_Sum
| sort Account_Number
| stats list(Account_Number) as Account_Number, list(Total_Sum) as Total_Sum, values(Monthly_Total) as Monthly_Total by Month

It looks good, but then I noticed that I haven't added up each Account_Number per month (see the table below, where 6210 appears 3 times). Can anyone help?

Month: April
Account_Number: 5751, 5756, 6201, 6203, 6205, 6210, 6210, 6210, 6340
Total_Sum: 1986, 23423.42, 139, 17003.09, 7107.98, 1395.12, 50, 18, 8.05
Monthly_Total: 51130.66

Also, I'd like to format the figures as euros. I've tried the line below, but it won't work; any ideas?

| fieldformat Total="€".tostring(Total, "commas")

Finally, Splunk has put the month April at the top. I've tried to use the eval command to move January to the top using the code below, but this hasn't helped; any ideas?

| eval M=case(Month=="April","04", Month=="January","01", Month=="February","02", Month=="March","03", Month=="May","05", Month=="June","06", Month=="July","07", Month=="August","08", Month=="September","09")
| sort M
| fields - M

Many thanks, P
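A sketch addressing all three points (not tested against your csv; field names are taken from your query): aggregate per account per month with sum() instead of stats count, apply the month sort after the final stats, and point fieldformat at a field that actually exists (your attempt referenced Total, which is not a field in the results):

... main search
| rename Order_Date as Month
| stats sum(Total_Sum) as Total_Sum by Account_Number, Month
| eventstats sum(Total_Sum) as Monthly_Total by Month
| stats list(Account_Number) as Account_Number, list(Total_Sum) as Total_Sum, values(Monthly_Total) as Monthly_Total by Month
| eval M=case(Month=="January","01", Month=="February","02", Month=="March","03", Month=="April","04", Month=="May","05", Month=="June","06", Month=="July","07", Month=="August","08", Month=="September","09", Month=="October","10", Month=="November","11", Month=="December","12")
| sort 0 M
| fields - M
| fieldformat Monthly_Total="€".tostring(Monthly_Total, "commas")

fieldformat is applied to Monthly_Total here because Total_Sum ends up multivalue after list(), and formatting a multivalue field with tostring may not render as you expect.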
This is an informational post rather than a question.

If you use WEF to gather logs from your infrastructure to a single point, from which you pick them up with

[WinEventLog://ForwardedEvents]

you might notice that this input can stop working after you upgrade to 9.1.0 (or above). The forwarder will log errors to splunkd.log about a wrong event format:

Invalid WEC content-format:'Events', for splunk-format = rendered_event
See the description for the 'wec_event_format' setting at $SPLUNK_HOME/etc/system/README/inputs.conf.spec for more details

If you go to the inputs.conf spec file (either in the README directory or on the Splunk website), you'll find the wec_event_format parameter (not present in versions up to 9.0.6), which must correspond with the setting in the WEF subscription. If the wec_event_format is "wrong" (the most typical situation being a WEF subscription created as Events while the UF uses the default rendered_event value), you need to set wec_event_format = raw_event in your input definition.
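In inputs.conf terms, the fix described above looks like this (keep whatever settings your stanza already has; only the last line is added, and disabled = 0 is shown just to make the stanza self-contained):

[WinEventLog://ForwardedEvents]
disabled = 0
wec_event_format = raw_event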
Hi Ryan, Thanks for your support. Yes, I checked with the team and they told me it is not possible to hide the other Agents. But I assume there should be some option, or at least a user/role-level configuration, to show only the Agents we require, so we see only the Metrics and Policies relevant to our Agents. Thanks.
Hi,

After some days, the Splunk server stopped receiving input. The forwarders have not changed, but I did make some changes on the Splunk server (I can't remember what). I also know that the firewall is not the cause of the problem. On the Splunk server we have also configured the Splunk Universal Forwarder, so the same server runs both Splunk Enterprise and the Splunk Universal Forwarder.

I'm not sure, but I think it's some trouble with the indexer, since it cannot receive inputs. I have also verified that the environment variables are OK, and changed the file permissions on all files/directories below $SPLUNK_HOME, so that should be fine.

On the Universal Forwarder clients, splunkd.log says that TcpOutProc is connected to the Splunk server. It also says that the Splunk server LISTENs on *:9997:

> ss -tnlup
tcp LISTEN 0 128 *:9997 *:* users(("splunkd",pid=170257,fd=41))

I assume telemetry data is sent to the Splunk server, but it is not indexed. One more piece of information, from the Splunk server under Settings - Data - Indexes, I can see:

_audit           SplunkLightForwarder   $SPLUNK_DB/audit/db            status says disabled
_internal        SplunkLightForwarder   $SPLUNK_DB/_internal/db        status says disabled
_introspection   SplunkLightForwarder   $SPLUNK_DB/_introspection/db   status says disabled
_telemetry       SplunkLightForwarder   $SPLUNK_DB/_telemetry/db       status says disabled
history          SplunkLightForwarder   $SPLUNK_DB/history/db          status says disabled
main             SplunkLightForwarder   $SPLUNK_DB/defaultdb/db        status says disabled

I assume it has something to do with wrong settings on the Splunk server. I hope someone out there can give me some useful tips/hints so we can use Splunk as normal again.

Rgds, Geir J. H
Thank you this worked  
Hi @joxers25, to my knowledge the migration path applies to Splunk Enterprise, not the Universal Forwarder, so an intermediate step shouldn't be required. Anyway, both versions 8.2.x and 9.1.1 are certified for Windows 2012/R2, as you can read at https://www.splunk.com/en_us/download/previous-releases-universal-forwarder.html Ciao. Giuseppe
Hello everyone, I need to increase the compute capacity of an HF running in AWS (it is only forwarding, not indexing). Splunk PS recommended putting a 16-CPU machine into service. I'm not sure whether the vCPU count shown on the AWS instance-type page reflects the number of cores that will be exploited by Splunk, or the number of threads that are available. Basically, I don't know if I want a 16 vCPU machine (c6i.4xl) or a 16 physical-core machine (c6i.16xlarge) to get Splunk using the recommended 16 CPUs. Does anyone have a quick answer? Google wasn't my friend here! Mike
I've received this warning: "If you do nothing, the app will fail in future Splunk upgrades that use jQuery 3.5". Does anyone use this add-on on Splunk Enterprise 9.x?
Hi Folks,  Has anyone upgraded 7.2.x UF on Windows 2012 R2 to 9.0.x? Documentation says to upgrade 7.2 to 8.0 or 8.1 then 9.x However,  Windows 2012 R2 not supported on 8.0.x or 8.1.x Can't find any documentation or posts related to this so asking here: Is it ok to upgrade 7.2.x to 8.2.x then onto 9.0.x for Windows 2012 R2?
Thank you, it's working!
strptime() will convert strings to epoch times:

| eval _time=strptime(time,"%a, %d %b %Y %H:%M:%S %Z")
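For example (the sample timestamp is made up), you can try it against a literal string:

| makeresults
| eval time="Wed, 09 Aug 2023 12:34:56 GMT"
| eval _time=strptime(time,"%a, %d %b %Y %H:%M:%S %Z")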