All Topics


Hi, my organization uses Splunk Enterprise and I have just started learning. I need to add around 4,000+ servers to Splunk Enterprise so that my team can view crucial metrics and data, along with reports such as reboots, CPU/memory usage, and drive alerts, all in a single view. Is this technically possible, and if so, how? The servers are in different regions and in different environments such as Production, Corporate, Stage, Development, etc. Anyone can reach out to me at smit.agasti10@gmail.com. It would be great if someone could help, and please be mindful that I am a total rookie.
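At that scale the usual pattern is a universal forwarder on each server phoning home to a deployment server, with server classes grouping hosts by environment. A minimal serverclass.conf sketch on the deployment server; the class name, whitelist pattern, and app name here are hypothetical:

    # serverclass.conf (sketch) -- group production hosts and push the Windows TA to them
    [serverClass:prod_windows]
    whitelist.0 = prod-*

    [serverClass:prod_windows:app:Splunk_TA_windows]
    restartSplunkd = true
    stateOnClient = enabled

Separate server classes per environment (Production, Corporate, Stage, Development) would let each group receive its own inputs and index settings.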
We have many assets with non-compliant names, which I have to fix. I need help with that, because I don't have much experience and I am not sure how to find the correct names.
Hi all, below are sample logs. Can I get props.conf settings for these samples?

    -------------------------------------------------------------
    Time: 02/12/2021 01:45:05.777
    Message: there is a exception error code gg456hhhrgh34567
    type: application code
    data: system
    -------------------------------------------------------------
    -------------------------------------------------------------
    Time: 24/12/2021 01:45:05.777
    Message: there is a exception error code 897fghj56879hgj
    type: application code jobs
    data: system jobs
    -------------------------------------------------------------
    -------------------------------------------------------------
    Time: 28/12/2021 02:54:15.767
    Message: there is a exception error code 89hjyt5643edhjjy656
    type: application code error
    data: system error
    -------------------------------------------------------------
    --------------------------------------
    Timeline: 12/02/2021 12:44:32.667
    Message Details - Application code contains error at 12/02/2021 11:30:00.212
    --------------------------------------
    --------------------------------------
    Timeline: 23/02/2021 10:23:22.124
    Message Details - Application code contains error at 12/02/2021 08:20:10.100
    --------------------------------------
    --------------------------------------
    Timeline: 24/02/2021 10:20:12.667
    Message Details - Application code contains error at 24/02/2021 07:10:23.112
    --------------------------------------
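A starting point, assuming events break on the dashed separator lines and the timestamps are day-first (24/12/2021 implies %d/%m/%Y); the sourcetype name is hypothetical:

    # props.conf (sketch)
    [custom:applog]
    SHOULD_LINEMERGE = false
    # break on one or more runs of dashes, discarding them
    LINE_BREAKER = ([\r\n]+(?:-{10,}[\r\n]+)+)
    TIME_PREFIX = ^Time(?:line)?:\s+
    TIME_FORMAT = %d/%m/%Y %H:%M:%S.%3N
    MAX_TIMESTAMP_LOOKAHEAD = 30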
Hi, I have a table. What I want to do is, when I click on the "Test Case" value of a particular row, expand that row (if possible, only that particular cell) and display a second, detail table. I am also using a token (set when clicking on the Test Case) to pass the value to the second table. Any help would be appreciated.
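In Simple XML this is usually done with a drilldown that sets a token, plus a second panel that only renders once the token exists. A sketch, assuming the column is named Test_Case (no space) and the token name is arbitrary:

    <row>
      <panel>
        <table>
          <search><query>... your base search ...</query></search>
          <drilldown>
            <set token="tc_tok">$row.Test_Case$</set>
          </drilldown>
        </table>
      </panel>
      <panel depends="$tc_tok$">
        <table>
          <search><query>... detail search test_case="$tc_tok$" ...</query></search>
        </table>
      </panel>
    </row>

This shows the detail as a side panel rather than an in-row expansion; true row expansion would need Dashboard Studio or custom JS.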
Hi, I have a dashboard showing the below.

    HBSS    ACAS    CMRSACAS    CMRSHBSS
    89      92      84          77

My question is: how do I get the dashboard to show only the highest count for the day, since the dashboard is updated daily? Any help will be fantastic. Thanks.
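If the four counts arrive as fields on a single result row (an assumption about the underlying search), the eval max() function can reduce them to the single highest value; a sketch:

    ... your base search ...
    | eval highest=max(HBSS, ACAS, CMRSACAS, CMRSHBSS)
    | table highest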
Hello, I am trying to measure the downtime or SLO breaches of certain customer endpoints over a period of time. For example, success rate and latency are metrics we measure for the endpoints. Currently we query Splunk every 5 minutes and capture these values; success-rate values below 97% count as breaches. One issue we have is that within that 5-minute window the SLO breach could have lasted only a few seconds or minutes, not the entire 5 minutes. If we instead capture the data from Splunk every minute, that is too many query hits against Splunk, and we store 1440 values/day instead of 288 values/day, plus the storage cost of keeping and parsing that data to compute SLO breaches:

1440 mins / 5 mins = 288 values
1440 mins / 1 min = 1440 values

Any ideas how we can query Splunk and get the threshold breaches accurately to the second, so we can report downtime for prod incidents accurately (how long the customer impact actually lasted), with fewer hits to Splunk and more real-time impact data for the business?
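One option is to keep the 5-minute polling cadence but have each query bin the raw events at finer granularity inside Splunk and return only the breach intervals, so resolution improves without more queries or more stored samples. A sketch; the index, sourcetype, and field names (success_rate, endpoint) are assumptions:

    index=metrics sourcetype=endpoint_health earliest=-5m
    | bin _time span=1s
    | stats avg(success_rate) as success_rate by _time, endpoint
    | where success_rate < 97
    | stats count as breach_seconds min(_time) as breach_start max(_time) as breach_end by endpoint

Each 5-minute run then returns at most one summary row per endpoint instead of 300 samples.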
Looking for the best way to audit all users accessing REST endpoints. I found a way to list users, but is there any way to limit this based on REST calls?

    | rest /services/authentication/users splunk_server=*
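REST calls against splunkd are logged to splunkd_access.log, which is indexed in _internal, so something along these lines should show who is calling which endpoint (field names per the default extractions; verify against your events):

    index=_internal sourcetype=splunkd_access uri_path=/services/*
    | stats count by user, uri_path, method, status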
Hello, collectd is the mechanism we use to obtain information about network traffic (octets per second). The search that visualizes the data in a dashboard is below.

    | mstats rate_avg("octets.*") WHERE index="network" chart=true host="device-*" span=5m by host
    | fields - _span*
    | rename "rate_avg(octets.rx): *" AS "in * bit/s"
    | rename "rate_avg(octets.tx): *" AS "out * bit/s"
    | foreach * [eval <<FIELD>>='<<FIELD>>' * 8 ]

The issue I am facing is when trying to graph time frames wider than a few months: there are too many data points and the results are truncated. I have played with charting.chart.resultTruncationLimit, but that only gets so far. Note: the span of 5m cannot be changed or the data is skewed. Is there a way to create graphs maintaining the time span but summarized per day or per month? For example:

Display a graph of the last 30 days but summarized per day or per week.
Display a graph of the last year but summarized per month or per week.

Thanks in advance.
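One approach is to keep the rate computation at span=5m (so the rates stay correct) and then re-bucket the finished results into coarser bins for display. A sketch that averages the 5-minute rates per day; the renames from above are omitted for brevity:

    | mstats rate_avg("octets.*") WHERE index="network" chart=true host="device-*" span=5m by host
    | fields - _span*
    | foreach * [eval <<FIELD>>='<<FIELD>>' * 8 ]
    | bin _time span=1d
    | stats avg(*) as * by _time

Changing span=1d to span=1w or span=1mon gives the weekly and monthly variants while keeping the data points per chart low.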
Is it possible to set the token value from another dashboard? For example, can I link from one dashboard to another dashboard like this?

    http://<base-URL>?test-token="value"
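In Simple XML, tokens backed by form inputs can be set through the URL with the form. prefix, so a link along these lines should work (host, app, dashboard name, and token name below are placeholders):

    https://<splunk-host>:8000/en-US/app/search/target_dashboard?form.test_token=value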
I have a stand-alone SH with 3 (non-clustered) peer indexers. I tried adding a 4th non-clustered indexer as a peer. Two days later /opt/splunk was 100% full. Has anyone had this happen? Is the data new data, or old data that was copied to that indexer? I had to remove that indexer as a peer, but now I don't know what the data on that 4th indexer is. Help. New to Splunk, obviously.
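To see what is actually occupying the disk, a dbinspect search run from the SH (while the 4th indexer is still a search peer) can break usage down by index and bucket age; a sketch, with the server name as a placeholder:

    | dbinspect index=*
    | search splunk_server="<fourth-indexer-name>"
    | stats sum(sizeOnDiskMB) as MB min(startEpoch) as oldest max(endEpoch) as newest by index

The oldest/newest epochs would show whether the buckets hold newly indexed data or older events.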
Hi everyone, I was reading through this article that led me to believe it's possible to display external web content in Splunk; however, it doesn't appear to be working for me. Interestingly, it works fine outside of Splunk (i.e. if I save the source as an HTML file locally on my computer), but it doesn't display the iframe if I put it in a Splunk dashboard. Any assistance would be greatly appreciated. Source code below.

    <?xml version='1.0' encoding='utf-8'?>
    <dashboard version="1.1">
      <label>My iFrame Dashboard</label>
      <row>
        <html>
          <h2>Embedded Web Page!</h2>
          <iframe src="https://myValidDomainName" width="100%" height="300"></iframe>
        </html>
      </row>
    </dashboard>
I am looking for a Splunk query that will pull the enabled and disabled ciphers from Windows servers in my environment, ranging from OS 2012 R2 to 2019. As a bonus, if someone has one for Oracle Linux, I would like that too.
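If the Splunk Add-on for Windows is collecting registry data, the SCHANNEL cipher keys could be pulled with something along these lines; the index, sourcetype, and field names here are assumptions and depend on how your registry inputs are configured:

    index=windows sourcetype=WinRegistry key_path="*SCHANNEL\\Ciphers*"
    | stats latest(registry_value_data) as value by host, key_path, registry_value_name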
Hi, I have a query that runs two different searches and displays the stats of each. Example:

    index="example" TERM(STOP)
    | rename message.payload as message1
    | stats count by message1
    | appendcols
        [search index="example2"
        | rename message.payload as message2
        | stats count by message2]

I want the results of message1 and message2 whose event timestamps are identical to be displayed next to each other in the statistics tab. I would like the stats displayed like this:

    Message1    Message2    Count
    <data>      <data>      23
    <data>      <data>      17

Is this possible? I hope this makes sense; I am still somewhat new to writing Splunk queries, and this is so far the most complex one I have needed to write.
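Since appendcols just pastes result rows side by side with no join key, one alternative is to search both indexes at once and group on the timestamp itself; a sketch, assuming second-level timestamps are what should match:

    (index="example" TERM(STOP)) OR index="example2"
    | eval message1=if(index=="example", 'message.payload', null())
    | eval message2=if(index=="example2", 'message.payload', null())
    | bin _time span=1s
    | stats values(message1) as Message1 values(message2) as Message2 count by _time
    | where isnotnull(Message1) AND isnotnull(Message2)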
Hello, I have some log messages like this, where various info is delimited by double colons:

    {"@message":"[\"ERROR :: xService :: xService :: function :: user :: 6c548f2b-4c3c-4aab-8fde-c1a8d727af35 :: device1,device2 :: shared :: groupname :: tcp\"]","@timestamp":"2023-03-20T23:34:05.886Z","@fields":{"@origin":"/dir/stuff/things/and/more/goes/here/file.js:2109","@level":"info"}}

I am trying to get a count per day of the 'function' shown above. The issue is that it can appear at various positions when the message is split on ' :: ', so I am trying to match a regex on the UUID and count 2 blocks backwards from there as a reliable way to extract the function. I have observed the UUID appearing in blocks 5, 6, and 7, so this is an attempt at a case for each, assigning a value used to look up the function. I am quite new to Splunk queries, but here is my stab at it (the final eval picks the block two positions before the UUID):

    index=iap source="/dir/stuff/things/xService.log" "ERROR :: xService ::"
    | rex field=@message mode=sed "s/(\[\"|\"\])//g"
    | eval tmp=split('@message', " :: "), check7=mvindex(tmp,7), check6=mvindex(tmp,6), check5=mvindex(tmp,5)
    | eval target=case(match(check7,"\w+\-\w+\-\w+\-\w+\-\w+"),7, match(check6,"\w+\-\w+\-\w+\-\w+\-\w+"),6, match(check5,"\w+\-\w+\-\w+\-\w+\-\w+"),5)
    | eval function=mvindex(tmp, target-2)
    | timechart span=1d count by function limit=0
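A single rex anchored on the UUID could also pull the function directly, regardless of which block it lands in; a sketch (the capture name is arbitrary):

    index=iap source="/dir/stuff/things/xService.log" "ERROR :: xService ::"
    | rex field=@message ":: (?<function>[^:]+?) :: [^:]+? :: [0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12} ::"
    | timechart span=1d count by function limit=0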
We are populating Splunk using an HEC connection with a source type of _json, set to the default character set of UTF-8. However, a field shown in the raw data as:

    "Character test: 0242 (\\u00f2): >\uC3B2<"

is displayed as:

    Character test: 0242 (\u00f2): >쎲<

I would have expected the display to show the character ò, which is the UTF-8 equivalent of hexadecimal C3 B2, rather than the displayed Unicode character.
We have a standard configuration for our workstations. Several of the fields are static, but some are dynamic (although these have a fixed length). I want to use a lookup table of all the values and apply it automatically to a sourcetype, but I'm not sure how I would go about matching the fields/values with a lookup definition. The standard is:

1 = Device Type - static, 1 char
2 = Building Code - static, 3 chars
3 = Department Code - static, 3 chars
4 = Function - static, 1 char
5 = Asset Tag - dynamic, 7 chars

So a machine may be named LBL1HRSSABC1234, indicating it's a laptop in Building 1 in HR Services that is Shared, with an asset tag of ABC1234. How could I use a lookup with these 4 static and 1 dynamic values to populate said values when a search is done on a particular host name? I should mention that I'm comfortable creating the lookup and applying it, just not how to get it to match on the criteria above. Thanks in advance!
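One way is to split the host name into its fixed-width segments first, then run each segment through its own lookup; a sketch, where the lookup definition names and output fields are hypothetical:

    | rex field=host "^(?<device_type>.)(?<building_code>.{3})(?<department_code>.{3})(?<function_code>.)(?<asset_tag>.{7})$"
    | lookup device_types device_type OUTPUT device_type_desc
    | lookup building_codes building_code OUTPUT building_desc

The same rex could live in props.conf as an EXTRACT on the sourcetype, with automatic lookups then keyed on the extracted fields.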
I have a log set from FWs. These logs have a field called "src". From what I can tell, this field is populated with values such as:

FQDN (myhost.mydomain.com)
Console or telnet
10.0.0.1

I'm looking to have two fields created from the "src" field: "IP" if the value in "src" is an IP address, and "src_nt_host" if the value is not an IP address. A small sample from the logged events:

    From: Console or telnet.
    From: myhost.mydomain.com.
    From: 10.0.0.1.

Any help/guidance is greatly appreciated.
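A search-time sketch using a simple (deliberately loose) IPv4 pattern test, with the field names following the naming above:

    ... your base search ...
    | eval IP=if(match(src, "^\d{1,3}(\.\d{1,3}){3}$"), src, null())
    | eval src_nt_host=if(isnull(IP), src, null())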
Hi everyone, I recently observed the Splunk internal logs and found a field called component, with two values: 1. TailingProcessor 2. WatchedFile

    INFO  WatchedFile [3338437 tailreader0] - Will use tracking rule=modtime for file='/path/.conf
    INFO  TailingProcessor [3338433 MainTailingThread] - Adding watch on path: /path

Please help me understand what these logs say. Thanks.
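For context, these components can be explored directly in _internal; a quick sketch to see their volume and severity:

    index=_internal sourcetype=splunkd component IN (TailingProcessor, WatchedFile)
    | stats count by component, log_level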
Hello, I have an issue regarding the creation of a new index. I want to create a new index to receive logs from NPS servers. First, I created a new app for NPS under deployment-apps on the master server, but I created the app's folder and edited everything as the root user. I assigned the app to a new server class on the master server. I installed the UF on the NPS servers and they are successfully connected to the deployment server; I added them to the new server class. Now I am getting an error that index=radius (defined in the new app) does not exist. So I went to master-apps on the master server; the previously defined indexes are present in directory X (other than _cluster), and I added the index radius to the local file of directory X. I validated the config and pushed it, and all the indexers in the cluster have the same config, but the index radius is not present. I did all the modifications using the root user. Can anyone advise me on the issue? Thank you.
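For reference, a minimal indexes.conf stanza as it would be pushed from the cluster master (paths are the defaults; repFactor = auto is what makes the index replicate in a cluster):

    [radius]
    homePath   = $SPLUNK_DB/radius/db
    coldPath   = $SPLUNK_DB/radius/colddb
    thawedPath = $SPLUNK_DB/radius/thaweddb
    repFactor  = auto

Since everything was created as root, it is also worth checking that the files are readable by the user Splunk runs as; root-owned configs that splunkd cannot read are a common cause of settings not taking effect.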
How do I find memory utilization at the service level for individual processes?
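If the Splunk Add-on for Unix and Linux is collecting process data via its ps input, a sketch along these lines would break memory usage down per process; the index name is an assumption, while pctMEM and COMMAND are fields the ps sourcetype normally provides:

    index=os sourcetype=ps
    | stats avg(pctMEM) as avg_mem_pct by host, COMMAND
    | sort - avg_mem_pct

On Windows, the equivalent would be the add-on's perfmon Process object counters rather than ps.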