All Topics

I need help on the query:
All, I just noticed that when the Splunk UF installs, it creates a user "splunk" with a login shell of /bin/bash in /etc/passwd, e.g. splunk:1007:/bin/bash. Is that needed? Can I switch it to nologin? Is anyone familiar with the impact of doing that?
Hello there! I need help with a search that is not providing the expected results. Let me share the details and background information: This search provides the list of the Windows servers' IPs found by a network discovery scan:   index=tenable sourcetype="tenable:sc:vuln" repository=DISCOVERY pluginID=11936 | rex "(?i)Remote operating system : (?P<os>[\D\d]+(?=Confidence level))" | rex "(?i)Confidence level : (?P<os_confidencial_level>[\d]+)" | makemv delim="\n" os | search os=*windows*server* | table ip dnsName os os_confidencial_level | dedup ip dnsName os   It delivers a total of 28806 IPs. This second search provides the list of the Windows servers' IPs located in the CMDB:   index=snow_ci sourcetype=cmdb_ci_server SYS_CLASS_NAME="Windows Server" OPERATIONAL_STATUS!=Retired NOT IP_ADDRESS IN ("0.0.0.0", "255.255.255.255", "127.0.0.1", "169.254.*") earliest=-24h | regex IP_ADDRESS="^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$" | dedup IP_ADDRESS | rename IP_ADDRESS as ip | table ip   I get a total of 22845 IPs. 
This means that ideally the number of Windows servers in the shadow should be 28806 - 22845 = 5961 So I'm trying to get a similar value with this final search:   index=tenable repository=DISCOVERY sourcetype="tenable:sc:vuln" pluginID=11936 | rex "(?i)Remote operating system : (?P<os>[\D\d]+(?=Confidence level))" | rex "(?i)Confidence level : (?P<os_confidencial_level>[\d]+)" | makemv delim="\n" os | search os=*windows*server* | search NOT [ search index=snow_ci sourcetype=cmdb_ci_server SYS_CLASS_NAME="Windows Server" OPERATIONAL_STATUS!=Retired NOT IP_ADDRESS IN ("0.0.0.0", "255.255.255.255", "127.0.0.1", "169.254.*") earliest=-24h | regex IP_ADDRESS="^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$" | dedup IP_ADDRESS | rename IP_ADDRESS as ip | fields ip ] | table ip dnsName os os_confidencial_level | dedup ip dnsName os   But unfortunately I'm not getting the expected results. I should get the IPs included in the first search but NOT in the second one, not sure why but I'm getting many results (21025) with IPs from the subsearch too. While troubleshooting I have tried this: if at the end of the whole search we look for the IPs that are removed with the subsearch, if the subsearch is working fine, we should get 0 results, which is exactly what I get!   
index=tenable repository=DISCOVERY sourcetype="tenable:sc:vuln" pluginID=11936 | rex "(?i)Remote operating system : (?P<os>[\D\d]+(?=Confidence level))" | rex "(?i)Confidence level : (?P<os_confidencial_level>[\d]+)" | makemv delim="\n" os | search os=*windows*server* | search NOT [ search index=snow_ci sourcetype=cmdb_ci_server SYS_CLASS_NAME="Windows Server" OPERATIONAL_STATUS!=Retired NOT IP_ADDRESS IN ("0.0.0.0", "255.255.255.255", "127.0.0.1", "169.254.*") earliest=-24h | regex IP_ADDRESS="^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$" | dedup IP_ADDRESS | rename IP_ADDRESS as ip | fields ip ] | table ip dnsName os os_confidencial_level | dedup ip dnsName os | search [ search index=snow_ci sourcetype=cmdb_ci_server SYS_CLASS_NAME="Windows Server" OPERATIONAL_STATUS!=Retired NOT IP_ADDRESS IN ("0.0.0.0", "255.255.255.255", "127.0.0.1", "169.254.*") earliest=-24h | regex IP_ADDRESS="^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$" | dedup IP_ADDRESS | rename IP_ADDRESS as ip | fields ip ]   So what is the issue here? This is driving me crazy so any help will be really appreciated. Thanks!
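A plausible cause worth checking: subsearches are capped by default (10,000 results via `subsearch_maxout` in limits.conf, plus a runtime limit), and the CMDB search returns 22,845 IPs, so the `NOT [...]` list may be silently truncated. That would also explain the verification test returning 0: both copies of the subsearch truncate to the same first 10,000 IPs. One subsearch-free sketch is to pull both indexes in a single search and keep IPs seen only by the scan (field names taken from the searches above; the per-clause `earliest` scoping and the Windows-OS filter on the scan side would still need adjusting):

```spl
(index=tenable repository=DISCOVERY sourcetype="tenable:sc:vuln" pluginID=11936)
OR (index=snow_ci sourcetype=cmdb_ci_server SYS_CLASS_NAME="Windows Server" OPERATIONAL_STATUS!=Retired earliest=-24h)
| eval ip=coalesce(ip, IP_ADDRESS)
| eval src=if(index=="tenable", "scan", "cmdb")
| stats values(src) as src values(dnsName) as dnsName by ip
| where mvcount(src)=1 AND src="scan"
```

This is a sketch, not a drop-in replacement: the `rex`/`makemv` extraction and the `os=*windows*server*` filter from the original search would have to run before the `stats`, applied only to the tenable events so the CMDB events are not discarded.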
Still a newbie, so I need help creating a dropdown list using HTML in a Splunk form. I will be using static values. I originally had the input dropdown and it works; I would now like to know if this can be done in HTML. Thank you so much!
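For context on the trade-off: a raw HTML `<select>` can live inside an `<html>` panel, but it will not set dashboard tokens without custom JavaScript, whereas the built-in Simple XML dropdown handles static values and tokens directly. A minimal Simple XML sketch (the token name and choice values here are made-up examples):

```xml
<fieldset>
  <input type="dropdown" token="env_tok">
    <label>Environment</label>
    <choice value="prod">Production</choice>
    <choice value="dev">Development</choice>
    <default>prod</default>
  </input>
</fieldset>
```

If the goal is purely visual styling, CSS on the Simple XML input is usually simpler than re-implementing the token wiring around an HTML `<select>`.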
It took me quite a while to get the search right, but I believe I have it returning the data that I would like to chart. The data looks something like this: _time SalesPerson NumberOfSales 2/1/2021 Tom 54 2/1/2021 Steve 46 2/1/2021 Molly 23 1/31/2021 Brenda 12 1/31/2021 Tom 33 1/31/2021 Molly 30   The top 3 sales people and their number of sales are listed per day. I would like to create a visual like this: I would even settle for a Trellis split by day, but I can't seem to make that happen from this data either. My search is something like this: base search | bin span=1d _time | stats sum(NUMBER_OF_SALES) as NumberOfSales by _time, SalesPerson | sort -_time -NumberOfSales | dedup 3 _time
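Assuming the search above already returns the intended three rows per day, one way to make it chartable as grouped columns is to pivot the rows so each salesperson becomes a column; `xyseries` (or `chart`) does that:

```spl
base search
| bin span=1d _time
| stats sum(NUMBER_OF_SALES) as NumberOfSales by _time, SalesPerson
| sort 0 -_time -NumberOfSales
| dedup 3 _time
| xyseries _time SalesPerson NumberOfSales
```

With one column per salesperson, a column-chart visualization groups the bars by day; `base search` is a placeholder for the search described above.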
Hi All... I am trying to find the long-running search queries using this REST search. It works fine, but it has a field "runDuration"; as per the doc: runDuration Time in seconds that the search took to complete. https://docs.splunk.com/Documentation/Splunk/8.1.1/RESTREF/RESTsearch#search.2Fjobs   However, the runDuration values seem totally wrong (the 1st value 22456767.32 is close to 259 days): | rest /services/search/jobs splunk_server=* | table runDuration (sorted by runDuration) runDuration 22456767.32 4493630.885 4364271.151000001 4156740.1780000003 4156682.699 4155523.87 4154739.233 4154733.224 4154682.228 4154629.832   Ours is an indexer-clustered, SH-clustered environment; I run this query on the monitoring console.  1. Is the runDuration from the REST jobs endpoint inaccurate/wrong? 2. Apart from the REST query, is there any other way to find the search run time, please? I used -  index=_audit action="search" search=* NOT user="splunk-system-user" exec_time=* | table search total_run_time user result_count is_realtime host This looks like the perfect one, but it does not have the user's timepicker info (what values the user used for earliest and latest times). Please suggest, thanks.
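One thing to keep in mind with the REST endpoint: `/services/search/jobs` includes every job visible to you, including real-time and long-lived scheduled searches, whose runDuration keeps growing for as long as the job exists, which can produce exactly these huge values. For the timepicker question, the audit events themselves usually carry the requested time range; in recent versions the fields `api_et`/`api_lt` and `search_et`/`search_lt` hold the earliest/latest as epoch values (or N/A for all-time). Field availability varies by version, so treat this as a sketch to verify against your own audit data:

```spl
index=_audit action=search info=completed NOT user="splunk-system-user"
| table _time user total_run_time api_et api_lt search_et search_lt search
| sort 0 - total_run_time
```

If those fields are absent in your version, they can sometimes be pulled out of the raw audit line with a `rex`.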
I have a dashboard panel where I'm trying to show how many users are experiencing a specific Event for the first time in the last x days. Right now I have the search syntax set up where it will look at the last x days and will only show users who have NOT experienced that same event in the last 5 months. This works with relative time frames (in last 7 days) but doesn't work with absolute time frames with epoch values (Since 1/20/21 until now). Is there a way to modify the search so that it works with both types of time available from the time picker? Can I set a variable depending on the type of time selected from a time_picker input? For example, can I set a variable where if the input time_picker is "x days ago" it inserts the following into the search: | eval DAYSAGO=relative_time(now(),"-6d@d")  but if the input time_picker is "Since 1/27/2021 until now" it inserts this: | eval DAYSAGO=1611705600   index="index_summary" | stats earliest(EventTime) AS Earliest_TimeStamp, earliest(orig_time) AS Earliest_TimeStampEpoch, count(eval(EventId="148" OR EventId="170")) AS "Device Enrollments" by EnrollmentEmailAddress, DeviceFriendlyName, Platform | where 'Device Enrollments' < 6 | sort - "Device Enrollments" | eval DAYSAGO=relative_time(now(),"-6d@d") | where DAYSAGO < Earliest_TimeStampEpoch | stats count sum(EnrollmentEmailAddress) as "Users"    
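One common pattern for this: a time picker token's `earliest` value arrives either as a relative string (e.g. "-7d@d") or as an epoch number (absolute picks), so a single eval can normalize both. A sketch, assuming a time input with token name `time_tok` (the token name here is hypothetical):

```spl
| eval t_earliest="$time_tok.earliest$"
| eval DAYSAGO=case(isnum(t_earliest), tonumber(t_earliest),
                    t_earliest=="now",  now(),
                    true(),             relative_time(now(), t_earliest))
| where DAYSAGO < Earliest_TimeStampEpoch
```

Note the all-time pick may deliver "0" or an empty earliest, which the `isnum` branch handles as epoch 0; verify the exact token values your picker emits before relying on this.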
I know it is possible to have multiple instances of splunk running on the same machine/server. My question is if there would be a conflict with the $SPLUNK_HOME variable being assigned to the same user "splunk". How can such a configuration be made possible on linux? I have a Search Head instance installed at /data/splunk/ and the home directory ($SPLUNK_HOME) for the "splunk" user is /data/splunk/bin/. I need to install a Forwarder on the same machine at /data2/splunkforwarder/ and the user will still be "splunk".  Will there be no conflict with 2 different home directories being assigned to the same user? Is such an implementation possible?
Hi, I'm new to Splunk, so pardon me if it's a straightforward query. I want to extract userIds from my first index and check how many do not exist in the second index. Example: index=auth-app would have a field like  UID: H0XF7PQU1 So, I want to extract H0XF7PQU1 from the first query, check if it exists in the second query (index=main-app), and get the count of ids that exist in the first index but not in the second.  Conceptually, I want to get the count of users that passed authentication (first index) but still did not make it to the main application (second index)
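One way to express this set difference without a subsearch is to search both indexes at once and keep the UIDs seen only in the auth index. A sketch, assuming both indexes extract the same `UID` field:

```spl
(index=auth-app UID=*) OR (index=main-app UID=*)
| stats values(index) as idx by UID
| where mvcount(idx)=1 AND idx="auth-app"
| stats count as auth_only_users
```

The `index` field is always available at search time, so no extra extraction is needed for it; if `UID` is not already extracted in one of the indexes, a `rex` before the `stats` would be required.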
Using 'delta' I am able to figure this out, but in one time direction.  Now I need the other time direction. In the current event, I essentially need to get the answer to: Is there another event within X seconds (both forwards and backwards) of the current event. Is there a way to do this?
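Since `delta` only looks one way through the result stream, a common workaround is two `streamstats` passes over opposite sort orders, so each event picks up both its later and earlier neighbor's timestamp. A sketch, using 60 seconds as a stand-in for X:

```spl
base search
| sort 0 -_time
| streamstats current=f window=1 last(_time) as next_time
| sort 0 _time
| streamstats current=f window=1 last(_time) as prev_time
| eval has_neighbor=if((next_time - _time) <= 60 OR (_time - prev_time) <= 60, 1, 0)
```

`base search` is a placeholder; the `sort 0` keeps all results rather than the default 10,000 sort cap. Events at either end of the range will have a null `next_time` or `prev_time`, which the `if` treats as no neighbor on that side.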
Hi, I have vulnerability dataset. Each vulnerability comes with a score from the scanning tool. Scanning tool has its own calculation and assigns a severity based on that. We on customer side, calculate Severity based on customer defined score ranges below: 9.0-10.0 > CRITICAL 7.0-8.9 > HIGH 4.0-6.9 > MEDIUM 0.1-3.9 > LOW 0.0 > NONE Issue is that when data comes from source/scanning tool, it has its own severities which are not always lined up with the above ranges. Our Score ranges above is the main root guideline to use. Example: Often times, Severity from data does not match the Score that is passed by the tool as I mentioned above. A Severity of MAJOR in data coming with a Score of 3.0. A Severity of MINOR in data coming with a Score of 3.0. A Severity of CRITICAL in data coming with a Score of 0.0. A Severity of CRITICAL in data coming with a Score of 10.0 (This is correct and inline with our ranges above) I need both of the options below: Desired output 1 (based on score ranges): SEVERITY_Data  Score_Data   Severity_Adjusted_Score   Severity_Adjusted_Code MAJOR                             3.0                       Median of 0.1-3.9                               LOW Desired output 2 (based on SEVERITY_Data e.g. value is MAJOR): SEVERITY_Data   Score_Data   Severity_Adjusted_Score   Severity_Adjusted_Code MAJOR                               3.0                      Median of 7.0-8.9                               HIGH Likewise for the rest of the severities and score ranges. Thanks in advance!!!
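For desired output 1 (score-driven), a `case()` over the customer ranges can derive both the adjusted code and the range midpoint; the midpoints below follow the ranges stated above (9.0-10.0 → 9.5, 7.0-8.9 → 7.95, 4.0-6.9 → 5.45, 0.1-3.9 → 2.0). A sketch, assuming the fields are named as in the example tables:

```spl
| eval Severity_Adjusted_Code=case(
    Score_Data>=9.0, "CRITICAL",
    Score_Data>=7.0, "HIGH",
    Score_Data>=4.0, "MEDIUM",
    Score_Data>=0.1, "LOW",
    true(),          "NONE")
| eval Severity_Adjusted_Score=case(
    Severity_Adjusted_Code=="CRITICAL", 9.5,
    Severity_Adjusted_Code=="HIGH",     7.95,
    Severity_Adjusted_Code=="MEDIUM",   5.45,
    Severity_Adjusted_Code=="LOW",      2.0,
    true(),                             0.0)
```

For desired output 2 (label-driven, e.g. MAJOR → HIGH), a lookup table mapping each incoming SEVERITY_Data value to its adjusted code and range midpoint is usually cleaner than a second long `case()`; the MAJOR→HIGH mapping would come from your own policy, not from the score.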
Hi, I am trying to see some business transactions via the controller but the UI page spinner keeps running and nothing is displayed as soon as I click 'Top Business Transactions' option on the controller. I experience same behavior after I choose 'AppDynamics Agents' or 'Admin' options from the top right navigation bar. Please help. I am seeing this error in agent log:- [logback-1] 01 Feb 2021 07:22:52,947 WARN AgentErrorProcessor - Agent error occurred, [name,transformId]=[com.singularity.tm.NewTransactionDelegate - java.lang.NullPointerException,585] [logback-1] 01 Feb 2021 07:22:52,947 WARN AgentErrorProcessor - 4 instance(s) remaining before error log is silenced [logback-1] 01 Feb 2021 07:22:52,947 WARN AgentErrorProcessor - 499 instance(s) remaining before instrumentation point is targeted for neutralization [logback-1] 01 Feb 2021 07:22:52,947 ERROR NewTransactionDelegate - Error in endContinuingTransactionAndRemoveCurrentThread java.lang.NullPointerException at com.singularity.ee.util.r.a(r.java:46) at com.singularity.ee.agent.appagent.services.transactionmonitor.common.lh.jb(lh.java:1119) at com.singularity.ee.agent.appagent.services.transactionmonitor.common.mf.a(mf.java:234) at com.singularity.ee.agent.appagent.services.transactionmonitor.common.mc.endContinuingTransactionAndRemoveCurrentThread(mc.java:286) at com.singularity.ee.agent.appagent.services.transactionmonitor.l.a(l.java:304) at com.singularity.ee.agent.appagent.services.bciengine.b.onMethodEnd(b.java:59) at com.singularity.ee.agent.appagent.kernel.bootimpl.FastMethodInterceptorDelegatorImpl.safeOnMethodEndNoReentrantCheck(FastMethodInterceptorDelegatorImpl.java:497) at com.singularity.ee.agent.appagent.kernel.bootimpl.FastMethodInterceptorDelegatorImpl.safeOnMethodEnd(FastMethodInterceptorDelegatorImpl.java:425) at com.singularity.ee.agent.appagent.entrypoint.bciengine.FastMethodInterceptorDelegatorBoot.safeOnMethodEnd(FastMethodInterceptorDelegatorBoot.java:124) at 
com.singularity.ee.agent.appagent.entrypoint.bciengine.FastMethodInterceptorDelegatorBoot.safeOnMethodEndNormal(FastMethodInterceptorDelegatorBoot.java:107) at ch.qos.logback.core.rolling.helper.TimeBasedArchiveRemover$ArhiveRemoverRunnable.run(TimeBasedArchiveRemover.java:250) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Thanks.
Hello Splunkers: I'm looking to determine how many days a file is out of date. I have two epoch time fields and values: x = 1612285190.000 y = 1612303190.000000 I need to calculate the number of days between x and y, something like x - y = z.   I tried:  | eval z=x-y  z calculates to -18000.00 I tried converting this using: | eval x=strftime(z, "d%") and I get 31, which seems to be the 31st day of the month.   Thanks in advance.
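Since x and y are epoch seconds, the day count is just the difference divided by 86400 (seconds per day); `strftime` formats a point in time, not a duration, which is why it misleads here — formatting -18000 as a timestamp lands on 31 Dec 1969, whose day-of-month is 31. A sketch:

```spl
| eval z = round((y - x) / 86400, 2)
```

With the sample values, y - x = 18000 seconds, i.e. about 0.21 days (five hours).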
Some data is missing from the Splunk db to the index; kindly explain how to troubleshoot.
Hello Everyone, I need help setting up a Splunk alert that triggers if we get 60 errors per minute over a 5-minute period.
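One way to read "60 errors per minute over a 5-minute period" is: every minute in the window exceeded 60 errors. A sketch along those lines, scheduled every 5 minutes over a 5-minute window, with the alert triggering on "number of results > 0" (the index name and error term are placeholders):

```spl
index=app_logs "ERROR"
| bin _time span=1m
| stats count as errors_per_min by _time
| where errors_per_min >= 60
| stats count as breach_minutes
| where breach_minutes >= 5
```

If the intent is instead "300 or more errors total in 5 minutes", drop the per-minute binning and compare a single `stats count` against 300.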
Hello, I read hundreds of articles, but it's not working well. I am trying to gather data through Telegraf from my SNMP devices and other Linux devices.  First I installed Telegraf 1.17 on a Linux device and created a simple input file       [[inputs.diskio]]       and a file output         [[outputs.file]] ## Files to write to, "stdout" is a specially handled file. files = ["stdout", "/var/snmplog/metrics.out"] data_format = "splunkmetric" splunkmetric_hec_routing = false         On the universal forwarder where my Telegraf is running, I created an inputs.conf stanza for the metrics.out file       [monitor:///var/snmplog/*.out] disabled = false index = telegraf sourcetype = telegraf         and in my company app a props.conf stanza for the sourcetype telegraf       [telegraf] category = Metrics description = Telegraf Metrics pulldown_type = 1 DATETIME_CONFIG = NO_BINARY_CHECK = true SHOULD_LINEMERGE = true disabled = false INDEXED_EXTRACTIONS = json KV_MODE = none TIMESTAMP_FIELDS = time TIME_FORMAT = %s.%3N LINE_BREAKER = ([\r\n]+)         I tried to create the index "telegraf" as an event index and also as a metrics index. What is the right type of index for Telegraf sending splunkmetric data? Running with a metrics index type I don't see any events. Running with an event type index, I see events, but no fields are extracted and I have one big event with hundreds of values.       
{"_value":103672,"metric_name":"diskio.weighted_io_time","name":"loop2","time":1612282330}{"_value":0,"metric_name":"diskio.writes","name":"loop2","time":1612282330}{"_value":47713280,"metric_name":"diskio.read_bytes","name":"loop2","time":1612282330}{"_value":0,"metric_name":"diskio.write_bytes","name":"loop2","time":1612282330}{"_value":114060,"metric_name":"diskio.read_time","name":"loop2","time":1612282330}{"_value":0,"metric_name":"diskio.write_time","name":"loop2","time":1612282330}{"_value":3872,"metric_name":"diskio.io_time","name":"loop2","time":1612282330}{"_value":39751,"metric_name":"diskio.reads","name":"loop2","time":1612282330}{"_value":0,"metric_name":"diskio.iops_in_progress","name":"loop2","time":1612282330}{"_value":0,"metric_name":"diskio.merged_reads","name":"loop2","time":1612282330}{"_value":0,"metric_name":"diskio.merged_writes","name":"loop2","time":1612282330}{"_value":0,"metric_name":"diskio.writes","name":"loop3","time":1612282330}{"_value":0,"metric_name":"diskio.write_bytes","name":"loop3","time":1612282330}{"_value":0,"metric_name":"diskio.write_time","name":"loop3","time":1612282330}{"_value":0,"metric_name":"diskio.io_time","name":"loop3","time":1612282330}{"_value":0,"metric_name":"diskio.merged_reads","name":"loop3","time":1612282330}{"_value":0,"metric_name":"diskio.merged_writes","name":"loop3","time":1612282330}{"_value":20,"metric_name":"diskio.reads","name":"loop3","time":1612282330}{"_value":32768,"metric_name":"diskio.read_bytes","name":"loop3","time":1612282330}{"_value":0,"metric_name":"diskio.read_time","name":"loop3","time":1612282330}{"_value":0,"metric_name":"diskio.weighted_io_time","name":"loop3","time":1612282330}{"_value":0,"metric_name":"diskio.iops_in_progress","name":"loop3","time":1612282330}{"_value":13287963,"metric_name":"diskio.reads","name":"sda","time":1612282330}{"_value":26131311,"metric_name":"diskio.writes","name":"sda","time":1612282330}{"_value":315109941248,"metric_name":"diskio.read_bytes","name":
"sda","time":1612282330}{"_value":636415049728,"metric_name":"diskio.write_bytes","name":"sda","time":1612282330}{"_value":53542148,"metric_name":"diskio.write_time","name":"sda","time":1612282330}{"_value":0,"metric_name":"diskio.iops_in_progress","name":"sda","time":1612282330}{"_value":83744736,"metric_name":"diskio.read_time","name":"sda","time":1612282330}{"_value":30181608,"metric_name":"diskio.io_time","name":"sda","time":1612282330}{"_value":137261312,"metric_name":"diskio.weighted_io_time","name":"sda","time":1612282330}{"_value":142855,"metric_name":"diskio.merged_reads","name":"sda","time":1612282330}{"_value":36168488,"metric_name":"diskio.merged_writes","name":"sda","time":1612282330}{"_value":0,"metric_name":"diskio.iops_in_progress","name":"sda1","time":1612282330}{"_value":0,"metric_name":"diskio.writes","name":"sda1","time":1612282330}{"_value":48879616,"metric_name":"diskio.read_bytes","name":"sda1","time":1612282330}{"_value":3652,"metric_name":"diskio.read_time","name":"sda1","time":1612282330}{"_value":0,"metric_name":"diskio.write_time","name":"sda1","time":1612282330}{"_value":3652,"metric_name":"diskio.weighted_io_time","name":"sda1","time":1612282330}{"_value":1743,"metric_name":"diskio.reads","name":"sda1","time":1612282330}{"_value":0,"metric_name":"diskio.write_bytes","name":"sda1","time":1612282330}{"_value":3652,"metric_name":"diskio.io_time","name":"sda1","time":1612282330}{"_value":0,"metric_name":"diskio.merged_reads","name":"sda1","time":1612282330}{"_value":0,"metric_name":"diskio.merged_writes","name":"sda1","time":1612282330}{"_value":636415049728,"metric_name":"diskio.write_bytes","name":"sda2","time":1612282330}{"_value":53542148,"metric_name":"diskio.write_time","name":"sda2","time":1612282330}{"_value":30176468,"metric_name":"diskio.io_time","name":"sda2","time":1612282330}{"_value":0,"metric_name":"diskio.iops_in_progress","name":"sda2","time":1612282330}{"_value":142855,"metric_name":"diskio.merged_reads","name":"sda2","tim
e":1612282330}{"_value":36168488,"metric_name":"diskio.merged_writes","name":"sda2","time":1612282330}{"_value":13282111,"metric_name":"diskio.reads","name":"sda2","time":1612282330}{"_value":314988112896,"metric_name":"diskio.read_bytes","name":"sda2","time":1612282330}{"_value":83727040,"metric_name":"diskio.read_time","name":"sda2","time":1612282330}{"_value":137243956,"metric_name":"diskio.weighted_io_time","name":"sda2","time":1612282330}{"_value":26131311,"metric_name":"diskio.writes","name":"sda2","time":1612282330}{"_value":0,"metric_name":"diskio.merged_writes","name":"loop0","time":1612282330}{"_value":0,"metric_name":"diskio.write_bytes","name":"loop0","time":1612282330}{"_value":0,"metric_name":"diskio.merged_reads","name":"loop0","time":1612282330}{"_value":17583104,"metric_name":"diskio.read_bytes","name":"loop0","time":1612282330}{"_value":42544,"metric_name":"diskio.read_time","name":"loop0","time":1612282330}{"_value":0,"metric_name":"diskio.write_time","name":"loop0","time":1612282330}{"_value":1432,"metric_name":"diskio.io_time","name":"loop0","time":1612282330}{"_value":36456,"metric_name":"diskio.weighted_io_time","name":"loop0","time":1612282330}{"_value":0,"metric_name":"diskio.iops_in_progress","name":"loop0","time":1612282330}{"_value":12320,"metric_name":"diskio.reads","name":"loop0","time":1612282330}{"_value":0,"metric_name":"diskio.writes","name":"loop0","time":1612282330}{"_value":0,"metric_name":"diskio.write_bytes","name":"loop1","time":1612282330}{"_value":162976,"metric_name":"diskio.read_time","name":"loop1","time":1612282330}{"_value":0,"metric_name":"diskio.iops_in_progress","name":"loop1","time":1612282330}{"_value":0,"metric_name":"diskio.merged_reads","name":"loop1","time":1612282330}{"_value":0,"metric_name":"diskio.writes","name":"loop1","time":1612282330}{"_value":36114432,"metric_name":"diskio.read_bytes","name":"loop1","time":1612282330}{"_value":5880,"metric_name":"diskio.io_time","name":"loop1","time":1612282330}{"_valu
e":139632,"metric_name":"diskio.weighted_io_time","name":"loop1","time":1612282330}{"_value":0,"metric_name":"diskio.merged_writes","name":"loop1","time":1612282330}{"_value":26595,"metric_name":"diskio.reads","name":"loop1","time":1612282330}{"_value":0,"metric_name":"diskio.write_time","name":"loop1","time":1612282330}         What goes wrong? What is missing? Can someone help me?  Thanks  best regards Stefan    
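The pasted event shows many JSON metric objects concatenated with no newlines between them, which matches the symptom: with `SHOULD_LINEMERGE = true` and a newline-only `LINE_BREAKER`, everything lands in one big event. One avenue to try is breaking on the `}{` boundary so each JSON object becomes its own event. This is an untested sketch, not a verified configuration (note also that `INDEXED_EXTRACTIONS` runs on the forwarder while `LINE_BREAKER` applies at parse time, so where each setting must live depends on your topology):

```ini
# props.conf sketch -- assumes each metric is a bare JSON object with
# no newline between objects, as in the sample above
[telegraf]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}([\r\n\s]*)\{
NO_BINARY_CHECK = true
INDEXED_EXTRACTIONS = json
KV_MODE = none
TIMESTAMP_FIELDS = time
TIME_FORMAT = %s
```

That said, the splunkmetric serializer is primarily documented for delivery over HEC to a metrics index (e.g. Telegraf's HTTP output with `splunkmetric_hec_routing = true`); if file monitoring keeps fighting you, the HEC route may be the shorter path.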
Hello Splunkers, I have the following field with a date/time stamp:  2021-02-02 15:58:34.0 I am trying to convert it to a format such as: 1612279887.000  Ultimately my goal is to use it in a calculation. I'm using the following search: search | fields status_dt | table status_dt | rename status_dt AS "x" | eval y=strptime(x, "%m/%d/%y %H:%M:%S") | eval z= strftime(y,"%d %b %Y") | table x y z But all I get is a table that looks like: x                                                        y                    z 2021-02-02 15:58:34.0 What am I doing wrong?  Thanks in advance
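`strptime` returns null when the format string doesn't match the data, which leaves y and z empty; the value "2021-02-02 15:58:34.0" is year-first with a fractional second, not "%m/%d/%y". A sketch with a matching format:

```spl
| eval y = strptime(x, "%Y-%m-%d %H:%M:%S.%N")
| eval z = strftime(y, "%d %b %Y")
```

If the fractional-second specifier gives trouble in your version, trimming it first also works: `strptime(substr(x, 1, 19), "%Y-%m-%d %H:%M:%S")`.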
Hi, We have below type of logs: Log1-- 2021-02-02 10:12:49.889, APP_NAME="com.abcdef.abcdefghijkl", APP_TEMP_NAME="com.abcdef.abcdefghijkl", APP_TEMP_VER="1.0.11.20210120114351539", LASTDEPLOYED="2021-01-27 13:41:12.389", ENV_NAME="ABCEnvironment_AB" Log2-- 2021-02-02 10:12:49.889, APP_NAME="com.abcdef.st.xyz", APP_TEMP_NAME="com.abcdef.st.xyz-1", APP_TEMP_VER="1.1.4", LASTDEPLOYED="2018-11-18 05:59:44.333", ENV_NAME="ABCEnvironment_CD" From here I want to extract the below fields with separate rex commands for each. APP_NAME, APP_TEMP_NAM, APP_TEMP_VER, LASTDEPLOYED, ENV_NAME But I am unable to create the rex commands as expected. Can someone please help me in creating the rex commands..?
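Since each field is a quoted value after a fixed key, one rex per field with a `[^"]+` capture handles both log shapes above:

```spl
| rex "APP_NAME=\"(?<APP_NAME>[^\"]+)\""
| rex "APP_TEMP_NAME=\"(?<APP_TEMP_NAME>[^\"]+)\""
| rex "APP_TEMP_VER=\"(?<APP_TEMP_VER>[^\"]+)\""
| rex "LASTDEPLOYED=\"(?<LASTDEPLOYED>[^\"]+)\""
| rex "ENV_NAME=\"(?<ENV_NAME>[^\"]+)\""
```

Each capture stops at the closing quote, so dots, spaces, and timestamps inside the values are preserved as-is.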
Hello, I am a noob at Splunk. I know there are a few posts on this already, but I'm not able to find a solution for my specific problem. I want to make an alert for when indexing stops. I am using the following: | tstats latest(_time) as latest where index=* by host | where latest < relative_time(now(), "-1m") Normally, I want the "-1m" to be "-1d", but I changed it to test the alert. I can see in the search that I have a result from an IP address that is not indexing events every minute, so I know the search is working. When I save it as an alert, however, I get no alerts. I have tried real-time and scheduled alerts to attempt a trigger. Does anyone know why the alert doesn't work, or if there is something off with the search I am trying to use?   Thanks!
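Two things commonly bite with this pattern: the alert's own time range (an alert scheduled over "last 15 minutes" constrains the tstats window, so `latest` can never be older than the window), and the trigger condition. A sketch that pins the lookback explicitly and is meant to run as a scheduled alert with trigger "number of results > 0" (the 7-day lookback and hourly cadence are arbitrary choices to adjust):

```spl
| tstats latest(_time) as latest where index=* earliest=-7d by host
| where latest < relative_time(now(), "-1d")
```

One caveat: hosts silent for longer than the lookback disappear from `tstats` entirely, so a fully dead host stops alerting; comparing against a lookup of expected hosts covers that case.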
We are receiving messages about how our indexers (distributed environment) don't meet the minimum system requirements, but after taking a further look at Splunk's reference hardware documentation (https://docs.splunk.com/Documentation/Splunk/8.1.1/Capacity/Referencehardware) I still can't figure out where we are lacking.  The message I'm referring to is the following: "Health Check: Splunk server "server_name" does not meet the recommended minimum system requirements." This is currently what we're using for all three indexers: 64-bit Linux, 16 CPU cores, and 15.66 GB of RAM.  If anyone could provide some guidance on this matter, that would be greatly appreciated!