All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Is it possible to create a dashboard laid out as a table, with a health status widget in each cell, so the size and placement of the widgets stays consistent? We are building a dashboard for around 60 applications, based on a particular HR status. Something like the attached. It is very time consuming to keep adjusting the size and placement of each widget.
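One workaround worth considering (a sketch I'm adding, not from the post; the index, field names, and colors are assumptions) is a single table with color-coded status cells, so the table handles sizing and placement instead of 60 individually placed widgets:

    <table>
      <search>
        <query>index=app_health | stats latest(status) as status by application</query>
      </search>
      <format type="color" field="status">
        <colorPalette type="map">{"HEALTHY":#53A051,"DEGRADED":#F8BE34,"DOWN":#DC4E41}</colorPalette>
      </format>
    </table>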
Hi all, I'm trying to set up an email alert notification using Splunk Enterprise 9.0, but I am getting the following error. I checked the error logs and the details are below. Could anyone help me understand this and guide me in the right direction to resolve it? Thanks. Regards, Aish
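Since the referenced error details did not come through, one way to surface the underlying failure (a sketch; sendemail errors typically land in the internal logs) is:

    index=_internal (source=*python.log* OR source=*splunkd.log*) sendemail ERROR
    | sort - _time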
29-Mar-2023 04:56:35:PM: |CPU Utilization % Average ------- 11   Expected result: 11
Hello, the following query is slow and processes a lot of data:

    environment=tesxt earliest=-0d@d (index=iis_openapi OR index=iis OR index=iis1) cs_method=POST
    | regex cs_uri_stem=(?i)"/account/v1/login/forgot-password"
    | eval Hour=strftime(_time,"%H")
    | search Hour>=5 AND Hour<9
    | bin _time span=60s
    | stats count as RPM by _time
    | eval TPS=RPM/60
    | stats max(TPS) as MaxTPS

Is there a way to optimize this query?
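A sketch of a possible rewrite (assuming cs_uri_stem is exactly that path and the 05:00-09:00 window is the only part of the day that matters): moving the literal into the base search lets the indexers filter on indexed terms up front, the regex becomes unnecessary if the field equality holds, and snapping the time range to the window removes the per-event hour eval entirely:

    environment=tesxt earliest=@d+5h latest=@d+9h
        (index=iis_openapi OR index=iis OR index=iis1) cs_method=POST
        TERM(forgot-password) cs_uri_stem="/account/v1/login/forgot-password"
    | bin _time span=60s
    | stats count as RPM by _time
    | eval TPS=RPM/60
    | stats max(TPS) as MaxTPS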
Is it possible to limit a role to only have write access to an index? For example, I want a role to be able to do summary indexing via the collect command, but I do not want them to be able to see what is in the index.
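A sketch of what that split might look like in authorize.conf, on my understanding that srchIndexesAllowed governs search access while the run_collect capability (8.2+) gates the collect command, which writes regardless of search permission; worth verifying against your version's docs:

    [role_summary_writer]
    importRoles = user
    run_collect = enabled
    # the summary index is deliberately omitted from srchIndexesAllowed,
    # so the role can collect into it but cannot search it
    srchIndexesAllowed = main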
I'm trying to create multiple field names from the same field, based on conditions that other values are met. I need to do this multiple times in one search to create new field names. For example:

if event=av AND cmd=judgement, then RENAME the field "result" to AV_Result
if event=spam AND cmd=judgement, then RENAME the field "result" to Spam_Result
if action=quarantine AND mod=session AND cmd=kill, then RENAME the field "Folder" to "Final_Folder_Result"

I'd like to do all this in one search.
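The rename command itself cannot be made conditional, but conditional eval assignments produce the same effect in one search (a sketch using the field names from the question):

    ... base search ...
    | eval AV_Result = if(event="av" AND cmd="judgement", result, null())
    | eval Spam_Result = if(event="spam" AND cmd="judgement", result, null())
    | eval Final_Folder_Result = if(action="quarantine" AND mod="session" AND cmd="kill", Folder, null())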
Hi Splunkers, I want to create a new field called "app_id" and send it along with the data while ingesting into Splunk. I came across the ingest-time eval option, which can do this. In my case, I want a field like "app_id" with its values extracted from other fields using a case condition:

    app_id = case(sourcetype=="aws:ecs:service:acid:stdout", mvindex(split(host,"-"),1), isnotnull('kubernetes.labels.applicationid'), 'kubernetes.labels.applicationid', isnotnull(applicationid), applicationid, isnotnull(aws_account_id), aws_account_id, 1=1, "NA")

Is this the right way to add the above case conditions to an INGEST_EVAL in transforms.conf? Like:

    INGEST_EVAL = app_id=case(sourcetype=="aws:ecs:service:acid:stdout", mvindex(split(host,"-"),1), isnotnull('kubernetes.labels.applicationid'), 'kubernetes.labels.applicationid', isnotnull(applicationid), applicationid, isnotnull(aws_account_id), aws_account_id, 1=1, "NA")

Is there any alternate solution for this? Please recommend. Thanks, Mala S
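For reference, an INGEST_EVAL only runs if it is wired up from props.conf; a minimal sketch of that wiring is below (the stanza names are placeholders). Two caveats: at ingest time only _raw, metadata, and already-indexed fields are available, so search-time extractions like kubernetes.labels.applicationid must exist as indexed fields at that point, and the catch-all comparison must be written 1==1, since a single = is not valid in eval expressions:

    # props.conf
    [your_sourcetype]
    TRANSFORMS-set_app_id = set_app_id

    # transforms.conf
    [set_app_id]
    INGEST_EVAL = app_id=case(sourcetype=="aws:ecs:service:acid:stdout", mvindex(split(host,"-"),1), isnotnull('kubernetes.labels.applicationid'), 'kubernetes.labels.applicationid', isnotnull(applicationid), applicationid, isnotnull(aws_account_id), aws_account_id, 1==1, "NA")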
Hello, can someone please share your inputs on what to do whenever splunkd.exe and splunk-winevtlog.exe go down? I'm looking to set up something that automatically restarts these services whenever they are down. Thanks
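On Windows, one common approach (a sketch; note that splunk-winevtlog.exe is normally spawned by splunkd, so recovering the splunkd service should bring it back as well) is to use the service's built-in recovery options. The service name may be Splunkd or SplunkForwarder depending on the install:

    REM restart the service 60s after each failure; reset the failure count daily
    sc failure Splunkd reset= 86400 actions= restart/60000/restart/60000/restart/60000
    REM confirm the recovery settings
    sc qfailure Splunkd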
I need to change the format of the names of the .csv attachments on reports from my organization's saved searches. I've gone through the older solutions, but they don't seem to be valid anymore. The goal is, when a report sends an email with a .csv attachment, to have the timestamp be "$name$-$time:%Y-%m-%d-%H:%M:%S$" instead of just "$name$-$time:%Y-%m-%d$". Neither setting reportFileName in alert_actions.conf nor adding an action.email.reportFileName line to the saved search itself is changing the name of the file. Is there a workaround for Splunk 9?
I'm a Splunk admin and I got asked to update the inputs.conf file for the PingFederate app. I'm a little unsure how to do it, and I figured I'd ask here instead of bricking our prod system. The request was: please modify inputs.conf for the Splunk agent to retrieve the PingFederate log for the latest version.

    [monitor://E:\pingfederate-11.2.3\pingfederate\log\]
    index=pingid
    sourcetype=pingidsrc
    disabled=false
    whitelist = audit.log$|server.log$|init.log$|transaction.log$|provisioner.log$

    [monitor://E:\pingfederate-11.2.3-console\pingfederate\log\]
    index=pingid
    sourcetype=pingidsrc
    disabled=false
    whitelist = audit.log$|server.log$|init.log$|transaction.log$|provisioner.log$

I'm good on that part. I found inputs.conf on our deployment server in 4 locations, two of which were backup locations. We have the app's inputs.conf under:

1. /export/opt/splunk/etc/deployment-apps/pingfederate/default/inputs.conf
2. /export/opt/splunk/etc/deployment-apps/pingfederate/default/inputs.conf
3. /export/opt/splunk/etc/peer-apps-backup/pingfederate/default/inputs.conf
4. etc...

Here is where I get a bit confused. I remember reading some docs about which folder is the main one that controls the apps, and one is just there kind of like a backup. I'm pretty sure I'm supposed to change the file in local and then push that; is that correct? If so, what exactly is default for? Is it some sort of failsafe in case the app acts up?

I saw something about manager-apps or master-apps, but since I'm using the deployment server, it should be under deployment-apps. However, the apps under deployment-apps do not all appear under Manage Apps in the Splunk web UI on the deployment server. All of the apps that appear on that page are located in /export/opt/splunk/etc/apps. I don't quite understand the difference here, and which directory is there for what reason.

And then, after I get that inputs.conf file sorted, how do I push it? I'm not pushing an app, but would I use the shcluster-bundle command even if I'm not pushing an entire app?

    SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<any_member_SHC>:<mgmt_port> -auth admin:<password>

I found that command, but it's run through the deployer, so I'm assuming that's not quite correct. (We have a clustered environment with two clusters of SH and indexers, btw.) I've gotten a bit lost in the sauce reading all of the docs; they are all blending together. I'd appreciate any input. Thank you for any help.
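For what it's worth, for apps served from deployment-apps the usual flow (a sketch; paths taken from the post) is to put the change in the app's local directory, so default stays pristine as the shipped baseline, and then reload the deployment server so it redeploys the changed app to its clients. The shcluster-bundle command is only for the deployer pushing to a search head cluster, which is a separate mechanism:

    # edit the local copy so app upgrades don't overwrite the change
    vi /export/opt/splunk/etc/deployment-apps/pingfederate/local/inputs.conf

    # re-scan deployment-apps and push changed apps to deployment clients
    /export/opt/splunk/bin/splunk reload deploy-server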
Hi all, I have 2 servers, each having 3 sources. I am able to receive logs from 2 of the sources on both servers, but I am not receiving logs from one source. I checked: there are logs on the server and no permission issues. How do I troubleshoot this?
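Two standard first checks (a sketch; the path is a placeholder for the missing source). On the forwarder, confirm the input is actually configured and enabled:

    $SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i "<path_to_missing_source>"

Then, from the search head, ask the forwarder's tailing processor what it thinks of that file:

    index=_internal sourcetype=splunkd (component=TailReader OR component=TailingProcessor) "<path_to_missing_source>"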
Hello Splunkers, I am trying to color code cells based on the values of the field 'execution_status'. The build columns are dynamic, not fixed; over time more builds will appear. How do I dynamically color code the cells with 'FAIL', 'PASS', 'ERROR'?

    <row>
      <panel>
        <title>Results by Builds</title>
        <table>
          <search>
            <query>index=* source IN (*) | stats values(execution_status) as execution_status by test_case, build | streamstats first(execution_status) as execution_status by test_case, build | chart values(execution_status) by test_case, build</query>
            <earliest>$time.earliest$</earliest>
            <latest>$time.latest$</latest>
            <sampleRatio>1</sampleRatio>
          </search>
          <option name="count">5</option>
          <option name="dataOverlayMode">none</option>
          <option name="drilldown">none</option>
          <option name="percentagesRow">false</option>
          <option name="rowNumbers">false</option>
          <option name="totalsRow">false</option>
          <option name="wrap">true</option>
        </table>
      </panel>
    </row>

    test case   build1   build2   build3   build4
    test1       PASS     FAIL     ERROR    PASS
    test2       ERROR    FAIL     PASS     FAIL
    test3       PASS     ERROR    PASS     PASS
    test4       FAIL     ERROR    PASS     ERROR
    test5       ERROR    ERROR    FAIL     PASS
    test6       PASS     ERROR    PASS     PASS
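A sketch of one way to handle the dynamic columns in Simple XML: in my understanding, a <format type="color"> element with no field attribute applies to every column, so build columns that appear later pick it up automatically (the color codes are my choice). It would go inside the <table> element:

    <format type="color">
      <colorPalette type="map">{"PASS":#53A051,"FAIL":#DC4E41,"ERROR":#F8BE34}</colorPalette>
    </format>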
See title. I'm using a scheduled query to prune a set of results from a lookup table. The lookup table has over 2M rows, but after the prune it's truncated down to 50,000. This happens exclusively when I schedule the lookup table with the "replace" option; append works perfectly. Pruning search:

    | inputlookup my_lookup.csv
    | where _time > relative_time(now(),"-6m")

I've tried setting the output location to both my_lookup.csv and to other lookups. In both cases, 50,000 results seems to be the limit for the replaced lookup table. Any help is appreciated.
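One thing worth trying (a sketch, assuming the 50,000 cap is imposed by the scheduled "output to lookup" action rather than by outputlookup itself) is to do the replace inside the search with outputlookup. Separately, note that in relative_time "-6m" means six minutes back; six months is "-6mon":

    | inputlookup my_lookup.csv
    | where _time > relative_time(now(), "-6mon")
    | outputlookup my_lookup.csv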
We have error messages like "Corrupt csv header in CSV file, 2 columns with the same name 'Severity'" and "CSV file contains invalid field ''". How do I find the offending files? My SHC has hundreds of CSV files, so it is hard to find the issues even with grep.
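A quick way to scan headers for the two conditions the errors describe (a sketch; assumes headers without embedded commas, run on each search head, and that the lookups live in the usual app location):

    for f in $SPLUNK_HOME/etc/apps/*/lookups/*.csv; do
      hdr=$(head -1 "$f")
      # duplicate column names
      echo "$hdr" | tr ',' '\n' | sort | uniq -d | sed "s|^|$f: duplicate column: |"
      # empty column names (leading, trailing, or consecutive commas)
      echo "$hdr" | grep -Eq '(^,|,,|,$)' && echo "$f: empty column name"
    done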
Hi, I am working on a project where we are taking in OpenTelemetry metric data. I am looking for a way to re-import metric data for testing, and I have been looking at eventgen. I took some metric data that I have and, using mpreview, exported it to a file (below, sample.metric4). I followed this great video: https://www.youtube.com/watch?v=WNk6u04TrtU

    _raw,_time
    "{""cmd"":""python3"",""component.name"":""rtpm-probe"",""metric_type"":""Gauge"",""module.names"":""MONITORING"",""mx.env"":""dell967srv.scz.murex.com:15023"",""parent.name"":""AGENT.dell1013srv.scz.murex.com"",""pid"":""789937"",""replica.name"":""rtpm-probe-ORCH2"",""server.mode"":""N/A"",""service.name"":""monitoring"",""service.type"":""agent-based"",""telemetry.sdk.language"":""python"",""telemetry.sdk.name"":""opentelemetry"",""telemetry.sdk.version"":""1.12.0rc2"",""metric_name:mx.process.errors.status"":4}",2023-03-30T14:21:24.000+0200

Then, using eventgen, I was able to import this data into an event index to test that it worked, and it did. I have been trying to get it to send to a metric index, but no luck. I think I am close, but I just can't crack it. This is the eventgen.conf file I have been using. To note, I have tried multiple sourcetypes relating to metrics, but I don't seem to be picking the correct one, or the mpreview format is not OK!

    [sample.metric4]
    mode = sample
    interval = 1m
    earliest = -2m
    latest = now
    count = -1
    outputMode = metric_httpevent
    index = bcg_eventgen_metrics
    host = bcg_eventgen
    source = bcgames
    sourcetype = otel

This is the error I am getting:

    The metric event is not properly structured, source=bcgames, sourcetype=metrics_csv, host=bcg_eventgen, index=bcg_eventgen_metrics. Metric event data without a metric name and properly formated numerical values are invalid and cannot be indexed. Ensure the input metric data is not malformed, have one or more keys of the form "metric_name:<metric>" (e.g..."metric_name:cpu.idle") with corresponding floating point values. 3/30/2023, 5:20:02 PM

When I change eventgen.conf to send to an event index, I can see the data. I just can't figure out how to get it to go to a metric index. Perhaps my outputMode is not correct. Thanks in advance for any help.
We have a transform to apply which sends events to nullQueue under certain conditions. We would like to initially whitelist only a couple of hosts for deployment, so we don't find out the hard way that we missed something after applying it to 40K+ hosts. What is the recommended way to do this? I saw the DEST_KEY approach for rewriting hosts based on patterns, but not a way to filter at the level of a specific transform.
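One way to gate the drop on a host allowlist inside a single transform (a sketch using INGEST_EVAL, since a regex transform can only key on one SOURCE_KEY at a time; the host names and drop pattern are placeholders):

    # transforms.conf
    [drop_selected_events]
    INGEST_EVAL = queue=if(match(host, "^(hostA|hostB)$") AND match(_raw, "<your_drop_pattern>"), "nullQueue", queue)

    # props.conf
    [your_sourcetype]
    TRANSFORMS-drop = drop_selected_events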
Is something like this possible?

    index=main sourcetype=iis host IN (| inputlookup serverlistA.csv)

I think the problem may be that inputlookup is a generating command and IN is evaluated before the inputlookup is done. I am looking for another way to do something similar. This is what I currently do:

    country IN (Afghanistan Albania Algeria Andorra ... 187 more ... Vietnam Yemen Zambia Zimbabwe)

The countries are just an example. I have dozens of dynamic lists of various sizes that I need to check in different searches.
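The usual pattern (a sketch, assuming serverlistA.csv has a host column) is a subsearch, which expands into host="a" OR host="b" ... before the outer search runs:

    index=main sourcetype=iis [ | inputlookup serverlistA.csv | fields host ]

The same shape works for the country lists: keep each list in a lookup and rename the lookup column inside the subsearch to match the field being filtered.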
Hi folks, I'm analysing Cisco CallManager telephone call detail records that have been ingested into Splunk. I need to find the extensions that make and receive a minimum number of external calls per month, every month, in the last calendar year, to evaluate who should continue to receive PSTN telephone service. I've managed to identify external calls, classify their callType (in a new field) as Incoming or Outgoing, and create a new field in each event called "activeNumber" that represents the internal number making or receiving the call.

Using | chart count by activeNumber date_month I can see a chart of calls per number per month. Using | stats count by date_month activeNumber | where count > 10 I can see the same data, but only where the count for any given month is greater than 10.

I'm stuck on where to go next, though, and I suspect this may not have been entirely the correct route to take. I need to find the activeNumber values that appear at least 10 times *every* month within the search period (12 months, for example). I toyed with the idea of concatenating the month and the activeNumber into a new field in each event, which would make the activeNumber's appearance in each month unique, but again didn't know where to go from there. My other idea was to make a list of activeNumber values for each month and then compare the lists, but I wasn't sure how to do that. I suspect that a subsearch may be necessary. Does anyone have an idea how I'd go about this?
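A sketch of one way to finish this, using a year-month key instead of date_month so that two Januaries in a 12-month window don't collide (the 12 assumes a full year in the search period, and >= 10 reads "at least 10 calls"):

    ... | eval month=strftime(_time, "%Y-%m")
    | stats count by activeNumber month
    | where count >= 10
    | stats dc(month) as qualifying_months by activeNumber
    | where qualifying_months = 12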
Hi, I have a graph with almost 100 values plotted, where the graph looks like the image below. We expect the graph to be the curve drawn in red through those values. Do we have an option to do curve smoothing/filtering in Splunk?
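One built-in option worth trying (a sketch; the field name and span are placeholders) is the trendline command, which overlays a moving average such as a 5-point simple moving average:

    ... | timechart span=1h avg(value) as value
    | trendline sma5(value) as smoothed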