All Topics

I would like the results of a search to populate the allow/block lists in TrackMe. The lookup file requires a unique key. Any tips on how to generate that at search time?
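In case a sketch helps (field and lookup names here are placeholders — check your TrackMe version's actual lookup schema): a common pattern is to derive a deterministic key from the identifying fields with an eval hash function, so the same entity always produces the same key:

```
... your search producing one row per entity ...
| eval key=md5(object . "|" . object_category)
| table key object object_category
| outputlookup append=true trackme_allow_list.csv
```

A stable hash means the key for a given entity never changes across runs; you may still want to dedup against the existing lookup contents before appending.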
Good morning. I'm trying to replace a "\" (backslash) in a string. Below is my example:

# Perform Global Replace for "&dir+c:\ 443"
SEDCMD-replace_backslash_1 = s/\&\w+\+\w\:\\\s443/&dir+c: 443/g

For some reason, the pattern match is not able to detect the backslash. Are there any special considerations when trying to remove a backslash? Regards, Max
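One possibility worth checking (a guess, not verified against your data): in a sed replacement string, a bare `&` is back-substituted as the entire match, which would make the event look unchanged. Escaping it may be all that's needed — a props.conf sketch with the stanza name as a placeholder:

```
[your_sourcetype]
# Same pattern as before; the only change is escaping the
# replacement-side "&" so it is treated as a literal ampersand
# instead of "the whole match".
SEDCMD-replace_backslash_1 = s/\&\w+\+\w\:\\\s443/\&dir+c: 443/g
```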
I am trying to align the single value result to the left. Below are the CSS versions I have tried, but neither of them is working:

.single-value .single-result { text-align: left !important; }
.dashboard-row .dashboard-panel .panel-body .splunk-single { width: 30%; text-align: right !important; }
Hi All, I am looking to configure a SOX app on Splunk, so I wanted to know if it is possible to restrict a user (or users) to only viewing logs from specific forwarders on the indexer. Thanks! Rahul Gadepalli
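One approach that may fit (a sketch — the role name, index, and hosts are placeholders, and it assumes the forwarders' hostnames land in the `host` field): give the restricted users a role with a search filter in authorize.conf:

```
[role_sox_viewer]
importRoles = user
srchIndexesAllowed = sox_index
srchIndexesDefault = sox_index
# Every search run by this role is implicitly AND-ed with this filter,
# so only events from these forwarders are visible.
srchFilter = host::sox-fwd-01 OR host::sox-fwd-02
```

Users holding only this role can still search normally, but never see events from other hosts.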
Hi, I'm trying to build a single-value dashboard for certain metrics. I would like to put it in the form of a timechart so I can have a trend value. However, this search gives me no results:

| tstats `summariesonly` min(_time) as firstTime, max(_time) as lastTime, count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.signature, Vulnerabilities.dest, Vulnerabilities.severity
| `drop_dm_object_name("Vulnerabilities")`
| where firstTime!=lastTime AND severity!="informational"
| eval age=round((lastTime-firstTime)/86400)
| timechart span=30d avg(age) by lastTime

Which is strange, because I feel like this command is almost the same:

| tstats `summariesonly` min(_time) as firstTime, max(_time) as lastTime, count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.signature, Vulnerabilities.dest, Vulnerabilities.severity
| `drop_dm_object_name("Vulnerabilities")`
| where firstTime!=lastTime AND severity!="informational"
| eval age=round((lastTime-firstTime)/86400)
| bucket lastTime span=30d
| stats avg(age) by lastTime

And this one returns the results that I want. Could anybody help me get a timechart out of this?
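In case a sketch helps: `timechart` needs a `_time` field, but after `tstats ... by` the rows only carry `firstTime`/`lastTime`, and `by lastTime` makes timechart treat that field as a split-by rather than the time axis. Assigning `_time` explicitly (untested against your data model) may be all that's missing:

```
| tstats `summariesonly` min(_time) as firstTime, max(_time) as lastTime, count
    from datamodel=Vulnerabilities.Vulnerabilities
    by Vulnerabilities.signature, Vulnerabilities.dest, Vulnerabilities.severity
| `drop_dm_object_name("Vulnerabilities")`
| where firstTime!=lastTime AND severity!="informational"
| eval age=round((lastTime-firstTime)/86400)
| eval _time=lastTime
| timechart span=30d avg(age)
```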
Hi, can anyone help me with a lookup table? I have a two-column lookup with column headings IPs and URLs, and I want to see if information in either CSV field appears in the index data at all. Some rows in the CSV just have a URL, some just have an IP, and some have both. Is there a search string that will search the contents of either column against the data held in the index? Thanks in advance!
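A sketch that may help (lookup and index names are placeholders): a subsearch that returns a field literally named `query` is substituted into the outer search as bare, OR-ed search terms, so you can merge both columns into one field and match either:

```
index=your_index
    [ | inputlookup mylookup.csv
      | eval query=mvappend(IPs, URLs)
      | mvexpand query
      | where isnotnull(query) AND query!=""
      | table query ]
```

Each IP or URL from either column becomes a term the outer search must match somewhere in the raw event.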
Hi Team, I want to set up an alert in Splunk Cloud for Windows machines when the CPU% of a single process is greater than 90. Please help me write the query; my query below is not working as expected.

index="index" host="windows" source="WMI:ProcessesCPU"
| WHERE NOT Name="_Total" | WHERE NOT Name="System" | WHERE NOT Name="Idle"
| streamstats dc(_time) as distinct_times
| head (distinct_times == 1)
| stats latest(PercentProcessorTime) as CPU% by Name
| sort -ProcessorTime
| eval AlertStatus=if('CPU%'> 90, "Alert", "Ignore")
| search AlertStatus="Alert"

wmi.conf configuration:

[WMI:ProcessesCPU]
interval = 60
wql = SELECT Name, PercentProcessorTime, PercentPrivilegedTime, PercentUserTime, ThreadCount FROM Win32_PerfFormattedData_PerfProc_Process WHERE PercentProcessorTime>0
disabled = 0

The total of all CPU processes on the Windows machine should be 100%, as we see in Task Manager, but I'm getting 100% for each process, which is wrong. Could someone please help?
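A possible explanation (worth verifying on your hosts): `PercentProcessorTime` from `Win32_PerfFormattedData_PerfProc_Process` is scaled per core, so on a multi-core machine a busy process can report far more than its Task Manager share. Dividing by the logical-core count is the usual normalization — a sketch, with the core count hard-coded as a placeholder:

```
index="index" host="windows" source="WMI:ProcessesCPU"
| search NOT Name IN ("_Total", "System", "Idle")
| stats latest(PercentProcessorTime) as raw_cpu by host, Name
| eval cpu_pct = round(raw_cpu / 8, 2)
| where cpu_pct > 90
```

Replace the hard-coded 8 with each host's logical processor count, ideally joined in from a lookup.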
I'm not sure why the search below works fine on the Search page but gives "Search did not return any events." in the dashboard. (On the Search page I'm using a single $ sign.)

| rest /servicesNS/-/-/saved/searches
| search disabled=0 is_scheduled=1
| eval encodedtitle=title
| eval encodedtitle=urlencode
| replace " " with "%20", "," with "%2C", "'" with "%27" in urlencode encodedtitle
| rename "alert.severity" as severity, "dispatch.earliest_time" as earliest_time, "dispatch.latest_time" as latest_time, "eai:acl.app" as app
| fields title, encodedtitle, disabled, severity, cron_schedule, description, earliest_time, latest_time, app, is_scheduled, next_scheduled_time, triggered_alert_count
| map maxsearches=1000 search="| rest /servicesNS/-/-/alerts/fired_alerts/$$encodedtitle$$ | dedup savedsearch_name sortby -trigger_time | table trigger_time_rendered, trigger_time | eval title=$$title$$, disabled=$$disabled$$, severity=$$severity$$, cron_schedule=$$cron_schedule$$, description=$$description$$, earliest_time=$$earliest_time$$, latest_time=$$latest_time$$, app=$$app$$, is_scheduled=$$is_scheduled$$, next_scheduled_time=$$next_scheduled_time$$, triggered_alert_count=$$triggered_alert_count$$"
| append [| makeresults | eval test="test"]

What is more surprising is that, in the dashboard, even the value from makeresults is not showing, and there are no errors in search.log. I'm using Splunk version 8.0.4.1.
I'm getting the following errors in splunkd.log on my Splunk Enterprise server after configuring an input:

09-21-2020 08:48:16.733 -0400 ERROR ExecProcessor - message from "python "D:\Program Files\Splunk\etc\apps\Splunk_TA_jmx\bin\jmx.py"" Sep 21, 2020 8:48:16 AM org.exolab.castor.mapping.Mapping loadMapping
09-21-2020 08:48:16.733 -0400 ERROR ExecProcessor - message from "python "D:\Program Files\Splunk\etc\apps\Splunk_TA_jmx\bin\jmx.py"" INFO: Loading mapping descriptors from jar:file:/D:/Program%20Files/Splunk/etc/apps/Splunk_TA_jmx/bin/lib/jmxmodinput.jar!/mapping.xml

Splunk version 7.3.5, app version 4.0.0. I am able to search for data from the input, but the logging of the above messages is cluttering up the log. Any assistance on how to address these errors would be appreciated.
The goal is to subtract the file counts of folders from sites MAIN and BACK. Sample data:

| makeresults | eval f="MAIN-AAA", val="17313"
| append [| makeresults | eval f="BACK-AAA", val="17357"]
| append [| makeresults | eval f="MAIN-BBB", val="682"]
| append [| makeresults | eval f="BACK-BBB", val="682"]
| append [| makeresults | eval f="MAIN-CCC", val="38767"]
| append [| makeresults | eval f="BACK-CCC", val="38804"]
| eval site=substr(f,1,4)
| eval folder=substr(f,6)

The output should be something like:

folder, count MAIN, count BACK, difference
AAA, 17313, 17357, 4
BBB, 682, 682, 0
CCC, 38804, 38767, 37

Thanks
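Continuing from the sample data above, one way to sketch it (assuming one value per site/folder pair): pivot the site values into columns with `chart`, then subtract:

```
| eval val=tonumber(val)
| chart max(val) over folder by site
| eval difference=abs(MAIN - BACK)
| rename MAIN as "count MAIN", BACK as "count BACK"
```

`chart ... over folder by site` turns the MAIN/BACK rows into per-folder columns, after which the difference is a simple eval.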
I have written the query below, but I am not getting any results from it:

| stats values(val) as val by category
| map search="| dbxquery connection=SplunkToHive query="select * from dat where year='2020' and month='08' and day='24' and name like '%$category$%'" | table category, value"

Can someone please help me?
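One thing that jumps out (a guess, not verified against your environment): the inner `query="..."` quotes terminate the outer `search="..."` string early. Escaping the embedded quotes usually resolves this with `map`:

```
| stats values(val) as val by category
| map maxsearches=100 search="| dbxquery connection=SplunkToHive query=\"select * from dat where year='2020' and month='08' and day='24' and name like '%$category$%'\" | table category, value"
```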
Since one of the usernames needs to be matched with a regex, I am forced to use regex. How can I do it so that I simulate a kind of OR condition between the main and sub search queries?

index=main suser IN("abc","def")
| search regex suser="DEF[0-9]" AND EventID IN("323","322")

Thanks
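If the intent is "suser is abc/def, OR suser matches the regex and the EventID is 323/322", one sketch (adjust the grouping to your actual intent) is to move the regex into `where ... match()`, where it can be OR-ed freely with literal comparisons:

```
index=main
| where (suser="abc" OR suser="def")
    OR (match(suser, "DEF[0-9]") AND (EventID="323" OR EventID="322"))
```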
I have tried using answers to similar questions on here, but I'm having a problem where I want to create a column of 4 labels. However, when I try to create these, the later labels overwrite the first label I have assigned. For example, I am looking to create a label column like this:

Gene   Feature1  Feature2  Feature3  ...  label
Gene1  1         3         1              most likely
Gene2  0         0         1              probable
Gene3  NA        NA        NA             unknown
Gene4  0         0         0              unlikely

However, my data is imported from a big-data analysis, so my features are not fully represented here, but the 4 labels are what I'm trying to get. I try to code this with:

df$label[(df$Mechanism == 1)|(df$med >= 3) |(df$OMIM == 1)] <- "most likely"
df$label[is.na(df$label) & (df$med <= 2 )|(df$SideeffectFreq>=1) |(df$MGI_Gene==1) |(df$model_Gene==1) |(df$Rank>=1) ] <- "probable"
df$label[(df$Causality == 'least likely')] <- "least likely"
df$label[is.na(df$label)] <- "unknown"

When I run the first line to create the "most likely" label, it labels 50 genes (which is what I expected), but running the second line for "probable" re-labels some of the "most likely" genes, leaving only 34 of them. I thought using is.na(df$label) or (df$label != 'most likely') would resolve this, but neither does. Is there a better way to go about creating a labels column like this? I am new to coding, so if anyone can also explain why is.na(df$label) or (df$label != 'most likely') do not work as I expected, that would be really helpful.
Edit: Example where the 'most likely' label is overwritten:

# Input data:
dput(dt)
structure(list(Gene = c("gene1", "gene2", "gene3", "gene4"),
    F1 = c(1L, 0L, 0L, 1L), F2 = c(3L, 0L, 0L, 1L),
    F3 = c("1", "1", "1", "least likely"),
    label = c(NA, NA, NA, NA)),
    row.names = c(NA, -4L), class = c("data.table", "data.frame"))

dt$label[(dt$F1 == 1)|(dt$F2 >= 3) |(dt$F1 == 1)] <- "most likely"
dt$label[(dt$label != 'most likely') & (dt$F1 == 2)|(dt$F2 == 0) |(dt$F1 == 1)] <- "probable"
dt$label[(dt$F1 == 0)|(dt$F2 == 0)] <- "unlikely"
dt$label[(dt$F3 == 'least likely')] <- "unknown"
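For what it's worth, a sketch of why the guard doesn't hold: in R, `&` binds tighter than `|`, so `is.na(dt$label) & (A) | (B) | (C)` parses as `(is.na(dt$label) & A) | B | C` — the NA guard only protects the first condition. Wrapping the whole OR chain keeps the guard in force (assuming the `dt` from the dput above):

```r
# Parenthesize the OR chain so is.na() guards all of it
idx <- is.na(dt$label) & ((dt$F1 == 2) | (dt$F2 == 0) | (dt$F1 == 1))
dt$label[idx] <- "probable"
```

The same precedence issue affects the `(dt$label != 'most likely') & ...` attempt; note also that `NA != 'most likely'` evaluates to `NA` rather than `TRUE`, which defeats that test on still-unlabelled rows even with correct parentheses.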
Hello, I wanted to add colours to the dashboard panels. Please find the dashboard code below. There are hyperlinks in each panel and I need to add colours to the background of each panel. Any help would be appreciated. @niketn

<dashboard stylesheet="layout.css" theme="light">
  <label>Test-Landing dashboard</label>
  <description>Links to all other dashboards</description>
  <row>
    <panel depends="$alwaysHideHTMLStyle$">
      <html>
        <style>
          body{margin-bottom: 150px;background: #FFDAB9!important;}
          .dashboard-body{background-color:#FFDAB9!important; font-weight:bold!important}
        </style>
        <p><center><a href="https://abc.splunkcloud.com/en-GB/app/project_base-Server/30min_block" style="color:black;" alignment="central">30 MIN BLOCK</a></center></p>
      </html>
    </panel>
    <panel>
      <html>
        <p><center><a href="https://abc.splunkcloud.com/en-GB/app/project_base-Server/high_disk_" style="color:black;">High Disk %</a></center></p>
      </html>
    </panel>
    <panel>
      <html>
        <p><center><a href="https://abc.splunkcloud.com/en-GB/app/project_base-Server/cpu_utilization_ae6hg218" style="color:black;">CPU Utilization AE6HG218</a></center></p>
      </html>
    </panel>
    <panel>
      <html>
        <p><center><a href="https://abc.splunkcloud.com/en-GB/app/project_base-Server/30min_block" style="color:black;">Server - Dashboard</a></center></p>
      </html>
    </panel>
    <panel>
      <html>
        <p><center><a href="https://abc.splunkcloud.com/en-GB/app/project_base-Server/sql_agent_not_running_in_Server_servers" style="color:black;">SQL Agent Not Running in Server Servers</a></center></p>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <html>
        <p><center><a href="https://abc.splunkcloud.com/en-GB/app/project_base-Server/high_cpu" style="color:black;">High CPU</a></center></p>
      </html>
    </panel>
    <panel>
      <html>
        <p><center><a href="https://abc.splunkcloud.com/en-GB/app/project_base-Server/sql_serv_not_running" style="color:black;">SQL Serv Not Running</a></center></p>
      </html>
    </panel>
    <panel>
      <html>
        <p><center><a href="https://abc.splunkcloud.com/en-GB/app/project_base-Server/sql_server_cpu_usage" style="color:black;">SQL Server CPU UsageK</a></center></p>
      </html>
    </panel>
    <panel>
      <html>
        <p><center><a href="https://abc.splunkcloud.com/en-GB/app/project_base-Server/job_fail" style="color:black;">JOB FAIL</a></center></p>
      </html>
    </panel>
    <panel>
      <html>
        <p><center><a href="https://abc.splunkcloud.com/en-GB/app/project_base-Server/long_tran" style="color:black;">LONG TRAN</a></center></p>
      </html>
    </panel>
  </row>
</dashboard>
I am dynamically extracting a sourcetype using props.conf and transforms.conf, but the extraction is not working as expected. The sourcetype I am extracting is "eu_test_splunktest_internal_dev", but it seems Splunk is only displaying "eu_test_" as the sourcetype and trimming the rest. Is there an official Splunk page that defines any restrictions on sourcetype names, or can I use the name mentioned above as a sourcetype?
I have a table like the one below, where each row has a different metric. The requirement: if the metric is "User count", I need to set the colour range for the Value column like this: Green=0-10, Red=10-20, Yellow=20-30, etc. If the metric is "Days count", the colour range should be: Green=0-100, Red=100-200, Default=Yellow. So the Value column's range differs for each metric. How can I achieve this?

Metric       Value
User count   30
Age count    42
Days count   25
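Simple XML's built-in table colouring only sees the cell's own value, so it can't consult the Metric column directly. A common workaround (sketched here; the field names follow your table, the class names are placeholders) is to compute a severity class in SPL and colour on that:

```
| eval range=case(
    Metric=="User count" AND Value<10,  "green",
    Metric=="User count" AND Value<20,  "red",
    Metric=="User count" AND Value<30,  "yellow",
    Metric=="Days count" AND Value<100, "green",
    Metric=="Days count" AND Value<200, "red",
    1==1,                               "yellow")
```

For fully per-row colouring of the Value cell itself, a custom table cell renderer in JavaScript is usually needed.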
Hello, I am currently struggling with an SPL search command. I want to show resource usage data in a table. The current SPL command is like below:

MY_SEARCH_COMMAND
| eval resource_a_total = a_val * threshold
| eval resource_b_total = b_val * threshold
| stats sum(resource_a_used) as a_used, sum(resource_a_total) as a_total, sum(resource_b_used) as b_used, sum(resource_b_total) as b_total by cluster
| eval a_usage = round(a_used / a_total * 100, 2)
| eval b_usage = round(b_used / b_total * 100, 2)
| table name a_usage b_usage

As you can see, to get usage data I have to calculate each host's total/used values and then aggregate by cluster name. In this situation, I want to add the difference in usage from yesterday to today. If yesterday's resource_a usage is bigger than today's, the table should show resource_a_diff like below:

name      | a_usage | a_diff | b_usage | b_diff
clusterA  | 80      | -5%p   | 70      | 5%p

How do I write this statement in an efficient way?
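A sketch of one approach (untested; it keeps only the resource_a half for brevity and assumes the search can span two full days): tag each event with its day, aggregate per cluster and day, then pivot the days into columns and subtract:

```
MY_SEARCH_COMMAND earliest=-1d@d
| eval day=if(_time < relative_time(now(), "@d"), "yesterday", "today")
| eval resource_a_total = a_val * threshold
| stats sum(resource_a_used) as a_used, sum(resource_a_total) as a_total by cluster, day
| eval a_usage = round(a_used / a_total * 100, 2)
| chart max(a_usage) over cluster by day
| eval a_diff = (today - yesterday) . "%p"
| rename today as a_usage
| table cluster a_usage a_diff
```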
Hi. I have just been presented with a very curious timestamp format:

18-08-2020 15:41:00,07 No running service instances found 0 service instances are online, which is different from the expected count of 1.
18-08-2020 15:46:00,13 No running service instances found 0 service instances are online, which is different from the expected count of 1.

My first instinct is to create timeformat = %d-%m%Y\n%H:%M:%S,%2N, but is it at all possible to mix regex with strptime() formats? Has anybody encountered something like this, and what solution did you come up with?
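A sketch that may be enough (the stanza name is a placeholder): no regex needs to be mixed into strptime() itself — in props.conf, TIME_PREFIX is a regex that positions the parser, and the comma-separated subseconds are covered by %2N:

```
[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %d-%m-%Y %H:%M:%S,%2N
MAX_TIMESTAMP_LOOKAHEAD = 25
```

So regex and strptime() cooperate via TIME_PREFIX rather than inside TIME_FORMAT.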
Can I change my dashboard to appear in Japanese? I tried changing en-US to ja-JP in the URL, which changes the default fields, but how do I show the dashboard data and panel titles in Japanese? Is this kind of conversion possible?
Splunk is not getting all VM logs from the ESXi server. Is there any way to know how many VMs are present on the ESXi server without logs?