All Topics

Hi all, in classic Splunk XML dashboards it was very easy to create conditional dashboards that, for example, hide when a token has a specific value. Is there any option to do this in Dashboard Studio?

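For reference, the classic Simple XML version of this uses the depends (or rejects) attribute; a minimal sketch, with $show_details$ as a hypothetical token name:

<row depends="$show_details$">
  <panel>
    <title>Details</title>
    <table>
      <search>
        <query>index=_internal | stats count</query>
      </search>
    </table>
  </panel>
</row>

Whether Dashboard Studio offers an equivalent depends on your version; early releases did not support token-based show/hide of panels, so check the Dashboard Studio documentation for your release.
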
Hi, here's my query:

| mstats max(_value) avg(_value) min(_value) prestats=true WHERE metric_name="cpu.system" AND "index"="osnixperf" AND [| inputlookup Unix.csv] BY host span=1h
| stats Avg(_value) AS Avg1 BY host
| join [| mstats max(_value) avg(_value) min(_value) prestats=true WHERE metric_name="cpu.user" AND "index"="osnixperf" AND [| inputlookup Unix.csv] BY host span=1h
    | stats Avg(_value) AS Avg2 BY host]
| eval totalavg=Avg1+Avg2, totalavg=round(totalavg,2)

I need a timechart that shows the totalavg value, like the image below.

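One way to get a time-based result without the join (a sketch, untested against this data; it keeps the same lookup-based host filter) is to pull both metrics in a single mstats call, average each metric per hour and host, and then add the two hourly averages:

| mstats avg(_value) AS avg WHERE (metric_name="cpu.system" OR metric_name="cpu.user") AND "index"="osnixperf" AND [| inputlookup Unix.csv] BY host, metric_name span=1h
| timechart span=1h sum(avg) AS totalavg BY host

Here sum(avg) adds the hourly cpu.system and cpu.user averages, which mirrors totalavg=Avg1+Avg2 but per time bucket, so it can be charted.
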
Hi, I am trying to ingest long JSON files into my Splunk index, where a record could contain more than 10000 characters. To prevent long records from getting truncated, I added "TRUNCATE=0" to my props.conf, and the entire record was ingested into the index. All events are forwarded and stored in the index, but I'm having problems with fields that appear towards the end of the JSON records.

I'm currently testing with 2 files: File A has 382 records, of which 166 are long records. File B has 252 records, all of which are long records. All 634 events are returned with a simple search of the index, and I can see all fields in each event, regardless of how long the event is. However, not all fields are extracted and directly searchable.

For example, one of the fields is called "name", and it appears towards the end of each JSON record. On the "Interesting fields" pane, under "name", it shows a count of only 216 events from File A, and none of the remaining 166 + 252 long events in Files A and B. The same is true for other fields that appear towards the end of each JSON record, while fields towards the beginning of the record show all 634 events. If I negate the 216 events, these fields do not appear on the Fields pane at all. Also, while I'm not able to directly search for "name=<name in File B>", I can still select the field from the event and "add to search", and all 252 events are returned.

I'm not sure why these fields are not properly extracted even though they do not appear to be truncated. How can I extract them properly? Thank you.

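This usually isn't ingest truncation but the search-time automatic key-value extraction hitting its character limit: by default auto-kv only scans roughly the first 10,240 characters of an event (the maxchars setting under [kv] in limits.conf, with a similar cutoff for automatic JSON extraction under [spath]), so fields near the end of very long events are never extracted even though the raw event is intact. One hedged option is to raise those limits on the search head (the values below are examples only):

[kv]
maxchars = 65536

[spath]
extraction_cutoff = 65536

Another is to extract the JSON explicitly at search time instead of relying on auto-kv, for example:

index=my_index sourcetype=my_json_sourcetype | spath | search name="some value"
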
Has anyone recently used the Splunk add-on for Tripwire? Their website does not have the correct installation file.

Hi all, I am not getting any user feedback data in the Teams add-on. I have checked the Microsoft Graph API and it has an endpoint specifically for that, but the add-on is still populating the data as null. Thoughts or ideas would be helpful. @Jason Cogner @Skyler Taylor @Robert Sisson

I want to add the in_usage and out_usage values from the table below. For example, I want to add in_usage to out_usage and the result should be a total. Likewise for the other values. Can someone give me ideas for this?

_time            source                  status     Avg          metric_name
11/3/2021 5:02   Interface_Summary_Out   out_usage  16.01833333  GigabitEthernet0/1
11/3/2021 5:00   Interface_Summary_In    in_usage   5.555        GigabitEthernet0/1
11/3/2021 4:02   Interface_Summary_Out   out_usage  17.085       GigabitEthernet0/1
11/3/2021 4:00   Interface_Summary_In    in_usage   5.270833333  GigabitEthernet0/1
11/3/2021 3:02   Interface_Summary_Out   out_usage  17.425       GigabitEthernet0/1
11/3/2021 3:00   Interface_Summary_In    in_usage   5.48         GigabitEthernet0/1

Please refer to the attached screenshot for reference.

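One hedged approach against the field names shown above: align the In and Out rows into the same hourly bucket, pivot the two status values into columns per interface, and add them.

| bin _time span=1h
| stats sum(eval(if(status="in_usage", Avg, 0))) AS in_usage,
        sum(eval(if(status="out_usage", Avg, 0))) AS out_usage
        BY _time, metric_name
| eval total = round(in_usage + out_usage, 2)

The bin aligns the 5:00 and 5:02 rows into one hour, and splitting by metric_name keeps one total per interface per hour.
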
Hi all. I have a report set up in Splunk producing a visualisation we're embedding on our website. A member of the public has asked if they can instead get the raw data in JSON format. I don't want to create a user for them in the system, and I'd really rather link them through our Azure API portal, where we have our other API endpoints for retrieving customer data.

So what I'm really wanting to do is work out how I can get the scheduled report data out of Splunk and into the Azure API. I'm aware this is not a purely Splunk question, but to be honest I thought people here would be most likely to have the relevant experience, especially as I seem to need to two-step the REST queries to find the search IDs and then get the results. It all ended up being quite a lot more complicated than I expected, and I couldn't find any relevant how-to guides online either, so I thought I'd ask here.

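For anyone wiring up the same thing, the two-step REST flow can be sketched like this (the host, report name, and credentials are placeholders, and a service account with access to the saved search is assumed). Step one lists the scheduled report's dispatch history to find the most recent search ID (sid); step two fetches that job's results as JSON:

# 1) latest dispatched jobs for the saved search
curl -k -u svc_user:password \
  "https://splunk.example.com:8089/services/saved/searches/My%20Report/history?output_mode=json"

# 2) results of a given sid as JSON
curl -k -u svc_user:password \
  "https://splunk.example.com:8089/services/search/jobs/<sid>/results?output_mode=json&count=0"

An Azure Function or Logic App behind the API portal can run these two calls on a schedule and cache the JSON for the public endpoint.
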
I am running a query that gives me various percentile metrics in different rows, and I would like to format them into an easily readable table. For example, here is the current outcome after I run the query below:

index=my_indexer
| stats p50(startuptime) as "startuptime_p50", p90(startuptime) as "startuptime_p90", p99(startuptime) as "startuptime_p99", p50(render_time) as "render_time_p50", p90(render_time) as "render_time_p90", p99(render_time) as "render_time_p99", p50(foobar_time) as "foobar_time_p50", p90(foobar_time) as "foobar_time_p90", p99(foobar_time) as "foobar_time_p99"
| transpose

column              row1
startuptime_p50     50
startuptime_p70     70
startuptime_p90     90
render_time_p50     51
render_time_p70     72
render_time_p90     93
foobar_time_p50     53
foobar_time_p70     74
foobar_time_p90     95

I would like to format the final table as follows (the column header is optional):

Marker      P50    P70    P90
startup     50     70     90
render      51     72     93
foobar      53     74     95

Thank you very much for your help.

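One way to get that shape (a sketch; note that by default transpose names its value column "row 1", so adjust the field name if yours differs) is to split the metric name and percentile apart with rex after the transpose and then pivot with chart:

| transpose
| rex field=column "^(?<Marker>.+)_(?<pct>p\d+)$"
| chart values("row 1") over Marker by pct

This yields one row per metric (startuptime, render_time, foobar_time) with one column per percentile.
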
Hello, I'm getting several APPCRASH errors (Event ID 1001). Is there a solution? Below is the entire error message:

Fault bucket , type 0
Event Name: APPCRASH
Response: Not available
Cab Id: 0
Problem signature:
P1: splunk-perfmon.exe
P2: 2048.1280.24325.31539
P3: 5f057cc6
P4: splunk-perfmon.exe
P5: 2048.1280.24325.31539
P6: 5f057cc6
P7: c0000005
P8: 00000000009b5003
P9:
P10:
Attached files:
\\?\C:\ProgramData\Microsoft\Windows\WER\Temp\WERBA61.tmp.WERInternalMetadata.xml
These files may be available here:
\\?\C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_splunk-perfmon.e_b6ce8fa2681eaf73929792a742cf0cc962c88_d85046b8_bc677b10
Analysis symbol:
Rechecking for solution: 0
Report Id: 0e443e45-f399-4d76-a355-2c805ed3a192
Report Status: 131172
Hashed bucket:
Cab Guid: 0

Hello all, this may seem easy, but it's been quite tedious. How can I create one field that has the common values from two separate strings?

Example:
Field1 = 123_yyy
Field2 = 777_x_123_0
Desired result: NewField = 123

I have tried the below, but it only gives me "False". I know the fields don't match exactly; I just want the part that does match. Any suggestions?

| eval matched=if(like(Field1,"%".Field2."%"),"True","False")

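Since the strings look underscore-delimited, one hedged approach (it assumes Splunk 8.0+ for mvmap, and that "common value" means a shared underscore-separated token) is to split both fields into tokens and keep the tokens that appear in both:

| eval t1=split(Field1,"_"), t2=split(Field2,"_")
| eval matched=mvmap(t1, if(isnotnull(mvfind(t2, "^".t1."$")), t1, null()))

For the example values this returns matched=123; if several tokens are shared, matched becomes a multivalue field.
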
Sample JSON:

{
  "message": {
    "application": "hello",
    "deploy": {
      "X": { "A": { "QPY": 14814 } },
      "Y": { "A": { "BWQ": 10967, "MQP": 1106 } }
    },
    "ABC": 4020,
    "DEF": 1532
  },
  "severity": "info"
}

I'm trying to extract the key names and values under message.deploy.Y.A (the key names are not static). The goal is to put them in a line chart and track the values over time. I tried foreach but don't know how to use eval with it. Can someone help please?

| foreach message.deploy.Y.A.*

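Two hedged sketches, assuming the JSON is auto-extracted so the keys show up as fields like message.deploy.Y.A.BWQ. For the line chart you may not need foreach at all; renaming the wildcarded fields first keeps the chart legend readable:

| fields _time "message.deploy.Y.A.*"
| rename "message.deploy.Y.A.*" AS "*"
| timechart span=1h avg(*)

If you do want foreach, the <<FIELD>> and <<MATCHSTR>> tokens reference the matched field inside the eval, for example to collect key=value pairs into one multivalue field:

| foreach message.deploy.Y.A.*
    [ eval keyvals = mvappend(keyvals, "<<MATCHSTR>>=" . '<<FIELD>>') ]
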
Can DB Connect execute an Oracle anonymous PL/SQL block, with input parameters, and get the output parameters (for indexing)? An anonymous PL/SQL block, not a stored procedure: the latter must first be created on the Oracle server, whereas the anonymous PL/SQL block exists on the client side (Splunk) only. Is this possible with Splunk DB Connect and the Oracle JDBC driver?

Best regards
Altin

Hello, I would like to reach out for some help in creating a custom sourcetype (cloned from _json), which I'm calling "ibcapacity". I've tried to edit the settings under this new sourcetype, but my results are even more broken.

The output of the file is formatted correctly as JSON (the jq checks come back all good), but when using the default _json sourcetype, the Splunk event gets cut off at 349 lines (the entire file is 392 lines). The other problem with the standard _json format is that it is not fully "color coding" the key-value pairs, though that could be because the closing brackets aren't in the Splunk event, since it was cut off at 349 lines.

Here is the event when searched with the standard _json sourcetype: this is where the Splunk event gets cut off. However, the rest of the file has this at the end (past line 349), which doesn't show up in the Splunk event:

        ],
        "percent_used": 120,
        "role": "Grid Master",
        "total_objects": 529020
    }
]

Can this community please help identify what the correct settings should be for my custom sourcetype, ibcapacity? Why is the Splunk event getting cut off at 349 lines when using sourcetype=_json? Thank you.

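A hedged starting point for props.conf (the values are examples; the usual cause of an event being cut off partway through a large JSON file is the default TRUNCATE limit of 10000 bytes):

[ibcapacity]
INDEXED_EXTRACTIONS = json
KV_MODE = none
SHOULD_LINEMERGE = false
TRUNCATE = 0

Note that with INDEXED_EXTRACTIONS the stanza needs to be on the instance that first reads the file (a universal forwarder, if one is involved), not only on the indexer or search head.
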
Hello, I wanted to ask the community for some help on getting server build specifications for a standalone search head/forwarder. We want to keep this search head/forwarder dedicated to M365 traffic. Any suggestions or documentation would be very helpful. Thank you.

Good afternoon. I'm currently running the trial Enterprise version on a workstation, which appears to have installed OK. I've installed a forwarder on a server, which again appears to have installed OK. However, I can't seem to get the two to talk to each other (I can't add the forwarder in the Enterprise UI because it can't find it). I've searched for the problem on Google etc. but have only come across one post that relates to what I am experiencing, and its solution didn't work. Would anyone happen to have any ideas or point me in the right direction please? TIA

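In case it helps, the usual minimum wiring works the other way round from "adding" the forwarder in the UI: the Enterprise instance listens for data and the forwarder is pointed at it. A sketch, with the hostname and the conventional receiving port 9997 as placeholders (paths will differ on Windows):

# on the Splunk Enterprise workstation (or Settings > Forwarding and receiving > Configure receiving)
$SPLUNK_HOME/bin/splunk enable listen 9997

# on the server running the universal forwarder
/opt/splunkforwarder/bin/splunk add forward-server my-workstation:9997
/opt/splunkforwarder/bin/splunk list forward-server

If data still doesn't arrive, check that TCP 9997 is open between the two hosts.
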
Hi! What's the best strategy if I want my AWS Lambda logs to be ingested directly into Splunk Cloud? I don't want my Lambda to log to CloudWatch first before ingesting into Splunk Cloud, to avoid paying for both.

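A common pattern is to post events from the Lambda function straight to an HTTP Event Collector (HEC) token on Splunk Cloud, skipping CloudWatch as the transport. A minimal sketch of the HEC call (the stack name, token, sourcetype, and index are placeholders; Splunk Cloud HEC endpoints generally look like https://http-inputs-<stack>.splunkcloud.com):

curl "https://http-inputs-mystack.splunkcloud.com/services/collector/event" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": {"message": "hello from lambda"}, "sourcetype": "aws:lambda", "index": "main"}'

Inside the Lambda handler the same HTTPS POST can be made with the runtime's own HTTP client; batching several events per request keeps the invocation overhead down.
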
[root@uf5 opt]# cd /opt/splunkforwarder/bin
[root@uf5 bin]# ./splunk start --accept-license
bash: ./splunk: cannot execute binary file
[root@uf5 bin]# bash path/to/mynewshell.sh
bash: path/to/mynewshell.sh: No such file or directory

How can I fix "bash: ./splunk: cannot execute binary file"? Please help me.

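"cannot execute binary file" usually means the splunk binary doesn't match the host's CPU architecture (for example an x86_64 package unpacked on an ARM/aarch64 or s390x machine), or the download/extraction was corrupted. A quick hedged check:

uname -m                               # architecture of the host (x86_64, aarch64, ...)
file /opt/splunkforwarder/bin/splunk   # architecture the binary was built for

If the two don't match, download the universal forwarder package built for your architecture and reinstall.
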
To manage our knowledge objects on our search heads, I created a blank app on the deployer under /opt/splunk/etc/shcluster/apps/ and deployed it to the search heads. That part worked well. I can now create knowledge objects in that app and they sync across the search heads. That works well too.

However, I am realizing that the app on the deployer is empty, and I'm wondering if that could come back to bite me some day. Should I instead be creating the knowledge objects in an app on the deployer in /opt/splunk/etc/apps/, then copy the changes to /opt/splunk/etc/shcluster/apps/, and finally deploy it to the search heads? That way I can still use the UI to create the knowledge objects but won't have to worry about anything. Or is my worry overrated?

I'm curious what others have found to be the best way to manage this. Should I just not worry about it and let the apps be different between the deployer and the search heads, or should I have some process for configuring it on the deployer, copying it to the deployment apps, and then pushing it out? Thanks.

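If you do go the "master copy on the deployer" route, the flow is roughly this sketch (the paths, app name, target, and credentials are placeholders):

# keep a UI-manageable copy under etc/apps on the deployer, then stage it for the SHC push
cp -R /opt/splunk/etc/apps/my_knowledge_app /opt/splunk/etc/shcluster/apps/

# push the configuration bundle to the search head cluster
/opt/splunk/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme

One thing to weigh: content pushed from the deployer lands in the app's default directory on the members, while objects created through the UI live in local and are replicated by the cluster itself, which is why many teams deliberately keep the deployer copy as just an empty container.
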
I am trying to create a search query that pulls Tenable (ACAS) critical and high scan results with the following information:

1.) IP Address (unique identifier for ACAS mapping to Xacta)
2.) DNS Name
3.) System Name
4.) MAC Address
5.) OS Type
6.) OS Version
7.) OS Build
8.) System Manufacturer
9.) System Serial Number
10.) System Model
11.) AWS Account Number (new field to capture in the standard)
12.) AWS Instance ID #
13.) AWS ENI

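A hedged starting point, assuming the data arrives through a Tenable add-on; every index, sourcetype, and field name below is a placeholder to verify against your own events, since they vary between Tenable.sc and Tenable.io and between add-on versions:

index=your_tenable_index sourcetype="tenable:sc:vuln" severity IN ("critical", "high")
| dedup ip
| table ip, dnsName, macAddress, operatingSystem, severity
| rename ip AS "IP Address", dnsName AS "DNS Name", macAddress AS "MAC Address", operatingSystem AS "OS"

The AWS-specific attributes (account number, instance ID, ENI) and hardware details are often not present in the vulnerability events themselves, so they may need to be joined in from an asset lookup or from AWS description data with the lookup command.
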
We are trying to figure out whether it is possible to get, from the internal log files, the start time and time spent viewing dashboards per user, for our monthly report. Any ideas whether this is possible? We are running Splunk Cloud 8.2.2106 and are upgrading to 8.2.2109 soon.

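Dashboard page views generally show up in the web access logs in _internal (sourcetype splunk_web_access on Splunk Enterprise; what is exposed in Splunk Cloud can differ, so treat this as a sketch). View counts and first/last view times per user are straightforward; "time spent" is not logged directly and usually has to be approximated from the gaps between successive requests:

index=_internal sourcetype=splunk_web_access method=GET uri_path="*/app/*"
| rex field=uri_path "/app/(?<app>[^/]+)/(?<dashboard>[^/?\s]+)"
| stats count AS views, min(_time) AS first_view, max(_time) AS last_view BY user, app, dashboard
| convert ctime(first_view) ctime(last_view)
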