All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I am a beginner and I want to create something like this. My Splunk search 1 is:

index=XXX source="/opt/middleware/ibm/" findsachinattendance | timechart count span=60m | stats max(*) AS *

My Splunk search 2 is:

index=XXX source="/opt/middleware/ibm/" findtendulkarattendance | timechart count span=60m | stats max(*) AS *

I found something, but I couldn't get it to work: https://community.splunk.com/t5/Splunk-Search/How-to-create-a-Table-where-each-row-is-the-result-of-a-query/m-p/545512
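One possible way to combine the two searches into a single result table is to run the second as a subsearch and label each branch. This is an untested sketch; the search_name field and its values are my own labels, everything else is taken from the two searches above:

```spl
index=XXX source="/opt/middleware/ibm/" findsachinattendance
| timechart count span=60m
| stats max(*) AS *
| eval search_name="sachin"
| append
    [ search index=XXX source="/opt/middleware/ibm/" findtendulkarattendance
      | timechart count span=60m
      | stats max(*) AS *
      | eval search_name="tendulkar" ]
| table search_name *
```

Note that append runs the bracketed search as a subsearch, so the usual subsearch limits (result count, runtime) apply to it.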
Hi, I have a field called ObjectD which is different for each event, but this field always contains a character chain with OU= and DC= segments. Example:

OU=Admin, OU=toto, OU=Utilisateur, DC=abc, DC=def

I need to filter the events where OU=Admin or OU=Utilisateurs, and DC=abc, so after the stats I am doing this in my search:

| where match(ObjectD,"OU=Admin|OU=Utilisateurs),DC=abc")

But it returns nothing. I also need to create a new field with the name of the OU, but because the first clause doesn't work, the rex command doesn't work either. Here is my rex:

| rex field=ObjectD "^[^=]+=[^=]+=(?<OU>[^,]+)"

Could you help please?
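The match() pattern above contains an unbalanced parenthesis, which makes the regex invalid. One untested sketch, based only on the sample value above, is to split the two conditions:

```spl
| where match(ObjectD, "OU=(Admin|Utilisateurs)") AND match(ObjectD, "DC=abc")
| rex field=ObjectD "OU=(?<OU>[^,]+)"
```

Note that rex keeps only the first OU it finds by default; adding max_match=0 would capture all of them into a multivalue field.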
I have two dropdown panels. When I select any value in Monitored Statistics, the Divisor value should change, and that works, but the problem is that I can also see the value from the previously selected field. For example, "CPU used by" in Monitored Statistics has a Divisor value of 100, but the Divisor dropdown also shows the previous value, 1000000, from the other field. How can I get only the one matching value in Divisor?
Hi Team, we are trying to add a new field, DisplayName, as an interesting field from the raw event below:

DisplayName: sample-Hostname

We tried the query below, but it is not working:

| rex field=_raw \"DisplayName", "Value":\s(?<DisplayName>\w+).

Also, please suggest how to create a query to check whether a user is logged in on one or more devices. Thanks in advance!
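For a raw event of the literal form "DisplayName: sample-Hostname", an untested rex sketch would be:

```spl
| rex field=_raw "DisplayName:\s+(?<DisplayName>\S+)"
```

The \w+ in the original attempt would stop at the hyphen in sample-Hostname, so \S+ is used here instead. For the second question, assuming the events carry fields named user and host (my assumption), a sketch for finding users logged in on more than one device:

```spl
| stats dc(host) AS device_count values(host) AS devices by user
| where device_count > 1
```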
So we have roughly a dozen UF hosts across on-prem and cloud, all uploading data directly to Splunk Cloud. I have had reports from other teams about decent gaps in reporting when they perform searches. For example, a query like index=it_site1_network for the last 2 hours currently has two large gaps of 25 minutes each. Before you ask what the activity level is on this index source: it's very high; there should be a few thousand events every minute. I've checked $SPLUNK_HOME/var/log/splunk/splunkd.log to ensure the monitored files are indeed being monitored, and overall system resource utilization (CPU, memory, disk, network) is very low. My question is: is metrics.log the only place to look for issues that might affect something like this?
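Before digging into metrics.log, it can help to establish whether events in the gaps were indexed late rather than lost. This untested sketch compares event time with index time over the affected window:

```spl
index=it_site1_network earliest=-2h
| eval lag_seconds=_indextime-_time
| timechart span=1m count avg(lag_seconds) AS avg_lag
```

If avg_lag spikes around the gaps, the data arrived but was delayed somewhere in the forwarding pipeline rather than missing at the source.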
Hi, we have several Universal Forwarders managed by a Deployment Server that occasionally "lose" their applications, stop sending logs to the indexers, and are no longer connected to the Deployment Server. The only way to reconnect these UFs is to log into the host, manually reinstall the apps that connect them to the DS, and then manage them from the DS again. How does this happen? Is there any other way to reconnect these UFs to the DS without logging in? Thanks, Mauro
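When a UF drops off the DS, the first thing worth confirming is that its phone-home configuration is still intact. A sketch of the relevant deploymentclient.conf stanza on the forwarder (hostname and port are placeholders):

```ini
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[target-broker:deploymentServer]
targetUri = ds.example.com:8089
```

The same setting can be applied remotely or by script with `splunk set deploy-poll ds.example.com:8089` followed by a restart, which re-registers the client without reinstalling apps by hand.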
RE: Case #3270697 After upgrade to 9.1.0.1, not able to send emails, e.g. for critical alerts! [ ref:_00D409oyL._5005a2bGRKI:ref ]

After upgrading to Splunk Enterprise v9.1.0.1 (single instance) last weekend (15 July 2023), and changing the admin password as suggested by Assist (which now throws an error!?):

1) Error message when using sendemail with O365 SMTP settings (I checked the login on O365, of course).
2) Assist stopped running???
3) Also: 3a, 3b (screenshots)
4) New GUI/layout?
5) An annoying, non-working "Don't show this again" message on every page, even when just stepping to another dashboard on the same server/domain.
6) Endless waiting.

What is next? Anyone else suffering from the same issues?
Hi, I have enabled an email alert and it's working fine. I want to add a URL link in the email body, but it is being rendered as plain text. Is there any way I can add the link to the email body? Thanks in advance.
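One possibility is to send the body as HTML so the link becomes clickable. An untested sketch using the sendemail search command (the recipient address and URL are placeholders):

```spl
| sendemail to="someone@example.com" content_type=html
    message="Details: <a href=\"https://example.com/dashboard\">open the dashboard</a>"
```

For a saved alert configured in the UI, the equivalent is switching the email's message format from plain text to HTML in the alert action settings.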
Hi, I'm having an issue with timestamping on one unstructured sourcetype (others, json and access_log, are fine). My deployment looks like UF->HF->Splunk Cloud. For some reason, data from the mentioned sourcetype is delayed by 1 hour; I mean, I have to widen the search time to >60m to see the latest data. Below is the output of a query comparing index time and _time. I tried to change the timestamp extraction in the sourcetype configuration in the cloud, but it didn't help.

I came up with the idea of using an INGEST_EVAL expression in a transforms.conf stanza to update the _time field at ingest time, after it has been parsed out from the actual event (+3600s):

# transforms.conf
[time-offset]
INGEST_EVAL = _time:=_time+3600

# props.conf
[main_demo]
TRANSFORMS = time-offset

I suppose there is no transforms.conf equivalent in the Splunk GUI (props.conf can be configured in the Source Types GUI section). Do I need to contact Splunk support to perform this kind of change on a cloud indexer? Or maybe there is another way to align _time to reflect real time? All help would be appreciated. Regards, Szymon
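Before patching the offset at ingest, it may be worth confirming that the lag really is a constant 3600 seconds. An untested sketch (the sourcetype name follows the props.conf stanza above):

```spl
index=* sourcetype=main_demo earliest=-4h
| eval index_lag=_indextime-_time
| stats min(index_lag) AS min_lag max(index_lag) AS max_lag avg(index_lag) AS avg_lag
```

A steady lag of almost exactly 3600 seconds would point at timezone handling (for example, a TZ setting on the sourcetype) rather than a real ingestion delay, in which case fixing the timezone is cleaner than shifting _time.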
Hi Team, we have installed dotNetAgentSetup64-23.6.0.10056 on my machine, and we are trying to profile a .NET Framework 3.5 application. Earlier my application was working fine without any issue, but after installing the AppDynamics .NET agent it developed problems. In Event Viewer I got the message below:

.NET Runtime version 2.0.50727.9171 - Fatal Execution Engine Error (00007FFB20973E86) (80131506)

and another error message:

Faulting application name: w3wp.exe, version: 6.2.20348.1, time stamp: 0x405e4c14
Faulting module name: mscorwks.dll, version: 2.0.50727.9171, time stamp: 0x64501630
Exception code: 0xc0000005
Fault offset: 0x0000000000255939
Faulting process id: 0x%9
Faulting application start time: 0x%10
Faulting application path: %11
Faulting module path: %12
Report Id: %13
Faulting package full name: %14
Faulting package-relative application ID: %15

Kindly let me know why I am getting this issue after installing the AppD .NET agent, and how to profile my .NET Framework 3.5 application with AppD. Thank you.
Hi Team, we have defined the index retention as 420 days, but when we try to access the logs they are in .csv format, not in event-value format. Please find attached the index details; below is the indexes.conf configuration of that index.

[rt_efb]
# 250MB a day / 35 days in warm / 460 days retention / 8 GB max index size
homePath = volume:hot/rt_efb/db
coldPath = volume:cold/rt_efb/colddb
thawedPath = $SPLUNK_DB/rt_efb/thaweddb
# set to 5 days, +/- 5 days padding
maxHotSpanSecs = 432000
# set to 2 hot buckets
maxHotBuckets = 2
homePath.maxDataSizeMB = 2500
coldPath.maxDataSizeMB = 5500
frozenTimePeriodInSecs = 39744000
maxTotalDataSizeMB = 26000

Can you please advise us on this?

Regards, Anil
Hi, I have a field with device models, like below, and from this info I want to define a new field, brand. I tried different approaches but can't get the brand field populated. Below is a test search with different case() variants; none of them works, but the where clause works well.

index=core_ct_report_*
| where (report_model = "cfgHT802")
| eval brand=case(report_model=cfgHT802, grandstream)
| eval brand2=case(like(report_model, cfgHT802), grandstream)
| eval brand3=case(like(report_model, "cfg%"), grandstream)
| table report_model brand brand2 brand3

What is wrong? What I need is something like this:

| eval brand3=case(like(report_model, "cfg%"), grandstream, ...)

Thanks,
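The likely catch is that in eval, unquoted words such as cfgHT802 or grandstream are treated as field names, not string literals, so those branches evaluate to null. An untested sketch with the literals quoted (the true() catch-all branch and the "unknown" label are my own additions):

```spl
| eval brand=case(like(report_model, "cfg%"), "grandstream", true(), "unknown")
```

More model-to-brand pairs can be added as further condition/value pairs before the true() default.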
Hi, I am new to Splunk. I have a stacked column chart from this query:

(index="A" OR index="B") | chart count(Level) over _time span=1mon by Level usenull=f useother=f

where Level has four values: 1, 2, 3, 4. The stack order is currently 2 > 3 > 4 (from top to bottom); what if I want to reverse it to display 4 > 3 > 2 (from top to bottom)? I have seen similar questions here using transpose or reverse, and I tried to follow them, but still have no luck. Could anyone help?
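Since the stacking order of a column chart follows the order of the series columns in the results, one untested sketch is to reorder those columns explicitly after the chart (the field names 1-4 are assumed to match the Level values):

```spl
(index="A" OR index="B")
| chart count(Level) over _time span=1mon by Level usenull=f useother=f
| table _time 4 3 2 1
```

table keeps the columns in the order listed, which is what the chart renderer then stacks.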
The requirement is to fetch the values of all agentName occurrences and put them in a field. I tried 'agentName':\s(?<agentname>.*?,) but it pulls only the first occurrence. Below is a sample:

0,1,"[{'active': 0, 'metricsAtStart': 'Jitter: 193.6 ms', 'metricsAtEnd': 'Jitter: 98.8 ms', 'agentId': 280961, 'agentName': 'BR15CORPTE01', 'dateStart': '2023-07-19 18:27:00', 'dateEnd': '2023-07-19 18:28:00', 'permalink': 'https://app.thousandeyes.com/alerts/list/?__a=243206&alertId=194913203&agentId=280961'}, {'active': 0, 'metricsAtStart': 'Jitter: 194.2 ms', 'metricsAtEnd': 'Jitter: 1.9 ms', 'agentId': 294526, 'agentName': 'US06CORPTE01', 'dateStart': '2023-07-19 18:23:00', 'dateEnd': '2023-07-19 18:28:00', 'permalink': 'https://app.thousandeyes.com/alerts/list/?__a=243206&alertId=194913203&agentId=294526'}, {'active': 1, 'metricsAtStart': 'Jitter: 100.2 ms', 'metricsAtEnd': '', 'agentId': 294566, 'agentName': 'US22CORPTE01', 'dateStart': '2023-07-19 18:28:00', 'permalink': 'https://app.thousandeyes.com/alerts/list/?__a=243206&alertId=194913203&agentId=294566'}, {'active': 0, 'metricsAtStart': 'Latency: 209 ms', 'metricsAtEnd': 'Latency: 142.9 ms', 'agentId': 337436, 'agentName': 'AR06CORPTE01', 'dateStart': '2023-07-19 18:26:00', 'dateEnd': '2023-07-19 18:27:00', 'permalink': 'https://app.thousandeyes.com/alerts/list/?__a=243206&alertId=194913203&agentId=337436'}]",194913203,2023-07-19 18:22:00,"[{'rel': 'related', 'href': 'https://api.thousandeyes.com/v6/tests/3271565'}, {'rel': 'data', 'href': 'https://api.thousandeyes.com/v6/net/metrics/3271565'}]",https://app.thousandeyes.com/alerts/list/?__a=243206&alertId=194913203,((avgLatency >
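rex keeps only the first match unless max_match is raised. An untested sketch against the sample above, taking the value between the single quotes:

```spl
| rex field=_raw "'agentName':\s'(?<agentname>[^']+)'" max_match=0
```

With max_match=0, agentname should come back as a multivalue field containing all four agent names rather than just the first.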
Could you please tell me why WinHostMon events are intermittently missing in Splunk? I don't see any errors in the internal logs, only warnings and info messages. Thanks in advance.
Hello, community, I wanted to ask a fundamental question regarding specific log collection. The question is: do we really pull logs from AD by sticking an agent on the AD DC machine(s)? I have a feeling, and I am almost 100% sure, that we collect logs from the core machine, not at the AD application layer. Can someone confirm my assumption, please, and explain how we actually pull the AD logs? Thank you all!
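For reference, when a Universal Forwarder is installed on a domain controller, AD activity is typically read from the Windows Event Log on that host rather than from any AD-specific API. A sketch of the usual inputs.conf stanza (the index name is a placeholder):

```ini
# inputs.conf on the domain controller's forwarder
[WinEventLog://Security]
disabled = 0
index = wineventlog
```

Directory-change monitoring (admon) is a separate, optional input; the bulk of what most deployments call "AD logs" is this Security event log channel.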
Hello, I have observed a strange issue on a few of my universal forwarders, with the Splunk Add-on for Windows. I have created scripts to check the disk and memory usage of the server and send the results back to Splunk. This setup is very old and works on the majority of the servers. The inputs are scheduled to run every 5 minutes and 30 minutes respectively. Whenever I reload the serverclass from my deployment server, I can see 1 or 2 events coming in; after that, there is no data for either of the inputs. When I check the internal logs, I can see that the script is being executed successfully, but no output/event can be seen in Splunk search. I have tried direct execution of the script in the input (writing the script in the input stanza) and saving it to a path and calling it from there as well. Note: I have even tried creating a separate app for these 2 specific inputs and deploying it from there, but the behavior is the same. Can you please help me understand what is wrong and why it is giving blank output?
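A first diagnostic step is to see what the forwarder's ExecProcessor logs for the script itself. This untested sketch assumes the UF's internal logs are forwarded; the host and script name are placeholders to fill in:

```spl
index=_internal sourcetype=splunkd component=ExecProcessor host=<uf_host> "<script_name>"
| table _time log_level message
```

Comparing these entries before and after a serverclass reload may show whether the scripted input is being disabled, restarted mid-run, or exiting without writing to stdout.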
I already have a clustered enterprise environment and I want to create an additional SH cluster for a dedicated purpose, but using the same IDX cluster. Do I need an additional Deployer for this, or can the existing Deployer manage multiple SH clusters? How do I do this? Thanks for any suggestions. R.
From the logs below I want to capture the DIM: data and the CONSUMER: data using rex. I am not very sure about the rex command; please help with this.

2023-01-22 00:12:25,234 update [data work-0][DIM: [123445-hfj-347384738748378] DIS:{} OIT: [done] flow: [update] {CONSUMER: ITT | CONSUMERID: | STATE: START | REQ: GET UPDATE} data collected for : itt
2023-01-22 00:12:25,234 update [data work-0][DIM: [678965-hfj-987563245678908] DIS:{} OIT: [done] flow: [update] {CONSUMER: OIM | CONSUMERID: | STATE: START | REQ: GET UPDATE} data collected for : OIM
2023-01-22 00:12:25,234 update [data work-0][DIM: [094567-hfj-986342345678769] DIS:{} OIT: [done] flow: [update] {CONSUMER: ANBB | CONSUMERID: | STATE: START | REQ: GET UPDATE} data collected for : anbb
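An untested rex sketch, based only on the three sample lines above (the capture-group names DIM and CONSUMER are my own choices):

```spl
| rex field=_raw "DIM:\s\[(?<DIM>[^\]]+)\].*\{CONSUMER:\s(?<CONSUMER>[^\s|]+)"
| table _time DIM CONSUMER
```

Against the first sample line this should give DIM=123445-hfj-347384738748378 and CONSUMER=ITT.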