All Topics



We have a report from a system that needs to be indexed into Splunk on a monthly basis. The report is generated on the 1st day of every month. Our requirement is to index the events in this report with the timestamp of the last day of the previous month, i.e. the day before the report is generated. Is this possible?
I want to set the default value of the token to * on the drilldown, before the user clicks a field value: $COUNT$ = "*" by default, and on click $click.value$ = the clicked field value. @kamlesh_vaghela

<row>
  <panel>
    <viz type="treemap_app.treemap">
      <title>Top 15 Result</title>
      <search base="health_rules">
        <query>| stats count AS "Result Count" BY count value | head 15</query>
      </search>
      <drilldown>
        <set token="VALUES">$row.value$</set>
        <set token="COUNTS">$row.count$</set>
      </drilldown>
    </viz>
  </panel>
</row>
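One common approach (a sketch, assuming Simple XML on Splunk 6.5 or later, and the token names used above) is to initialize the tokens at dashboard load with an init block, so the panels that consume $VALUES$/$COUNTS$ render before any click:

```
<form>
  <init>
    <!-- default values before any drilldown click -->
    <set token="VALUES">*</set>
    <set token="COUNTS">*</set>
  </init>
  ...
</form>
```

The drilldown then overwrites these defaults with $row.value$ and $row.count$ when the user clicks.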
Hi, I have been trying to get data from the ListViewEvent object from Salesforce through "Inputs" in the "Splunk Add-on for Salesforce". However, the data from the BigObject table never arrives in Splunk. Is there a setting that needs to be configured in Splunk? I have used the order-by clause "EventDate DESC".
Hi folks, I need your help fetching the latest event from a particular field. Sharing a sample event and the query I execute for the last 15 minutes.

Query: index=Blah sourcetype=blah_blah*

Example event:
2020-11-02 05:35:00.319, SOURCE="Tullett", COUNTVOL="879", TO_CHAR(SNAPTIME,'MM/DD/YYHH24:MI:SS')="08/31/20 00:59:00"

The initial date on this event is fine, which is today's date ("2020-11-02 05:35:00.319"), but the date at the end, in the field SNAPTIME_NEW, is old ("08/31/20 00:59:00"). Can you please help me with a query so that, when I run it for the last 15 minutes, I see only today's events, sorted by the date in the field SNAPTIME_NEW? Screenshot attached.

Thanks, Prateek
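One possible approach (a sketch; the SNAPTIME_NEW field name and its format are taken from the sample above and assumed to be extracted already) is to parse SNAPTIME_NEW into an epoch, keep only events whose snapshot date falls today, and sort on it:

```
index=Blah sourcetype=blah_blah*
| eval snap_epoch=strptime(SNAPTIME_NEW, "%m/%d/%y %H:%M:%S")
| where snap_epoch >= relative_time(now(), "@d")
| sort - snap_epoch
```

relative_time(now(), "@d") is midnight today, so anything with an older snapshot date is filtered out even when the event's _time is within the last 15 minutes.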
Hello, Splunk newbie here. I have a CSV file with a column of hostnames titled 'Device' that I added as a lookup, 'hostnames.csv'. I have an index that contains ComputerName, User, and a bunch of other fields. I want the index data to enrich my CSV data by adding the User that corresponds to each hostname. I will then export back to CSV to hand the data to someone else. Does anyone have some pointers so I can achieve this? I was looking at other similar posts, but I couldn't figure out whether I need append, outputlookup, join, or something else. This is what I have so far:

| inputlookup lookup.csv
| append [ search index=data source=Source1 Code=22 ]
| rename Device as ComputerName
| table ComputerName user_email
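One common pattern (a sketch; the field names User and user_email and the output filename are assumptions based on the description above) is to start from the index, restrict it to the hosts in the lookup, then write the enriched result back out as a new lookup you can export:

```
index=data source=Source1 Code=22
| rename ComputerName as Device
| search [ | inputlookup hostnames.csv | fields Device ]
| stats latest(User) as User by Device
| outputlookup hostnames_enriched.csv
```

The subsearch expands the lookup into a filter (Device=a OR Device=b ...), stats collapses to one row per host, and outputlookup saves the result where it can be downloaded as CSV.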
Hello, I have a Windows machine (Windows 10) that is configured to send data to an indexer, but the data is not being sent. Checking splunkd.log, I see a warning:

WARN IniFile - C:\Program Files\SplunkUniversalForwarder\etc\apps\<app_name>\local\inputs.conf, line 1: Cannot parse into key-value pair: ?####################################

This issue in inputs.conf has now been corrected, but could it have been what blocked the UF from sending data to the indexer?
I am trying to monitor a log file and index it into Splunk with the following log format:

02/11/2020,16:09:02,test-xxxxx,DISCONNECT ...

The date format is DD/MM/YYYY. I added the following stanza to $SPLUNK_HOME/etc/system/local/props.conf on the indexer:

[testsourcetype]
TIME_FORMAT = %d/%m/%Y,%H:%M:%S

However, the logs are still not being indexed into Splunk. Is there anything I missed? Thank you.
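A fuller stanza often looks like the sketch below; it assumes your inputs.conf actually assigns sourcetype=testsourcetype to the monitored file, and that the timestamp sits at the start of each line:

```
[testsourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y,%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```

Note that a wrong TIME_FORMAT alone usually does not stop indexing; Splunk falls back to a guessed timestamp. If nothing is arriving at all, it is worth verifying the monitor input itself and that the sourcetype name in inputs.conf matches the props.conf stanza.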
I'm attempting to set up my AWS Elastic Beanstalk instance to also run the Splunk Universal Forwarder and forward data to my Splunk Cloud account. I am roughly following this guide: https://tech.smartling.com/logs-collection-from-aws-elasticbeanstalk-splunk-7edd0348bc96 with some changes to the .ebextensions file it gives. I know it uses an older version of the Universal Forwarder, so the admin:changeme login doesn't work, but I went on to this page: https://docs.splunk.com/Documentation/Splunk/7.1.0/Security/Secureyouradminaccount#Create_a_password_when_starting_Splunk_for_the_first_time and followed it by creating a user-seed.conf file with a random password. I even added a cat of that file, and it printed out the correct information. However, I'm still getting the "No users exist. Please set up a user." error. Does anyone have any ideas? Here's my actual .ebextensions file:

container_commands:
  01install-splunk:
    command: /usr/local/bin/install-splunk.sh
  02set-splunk-outputs:
    command: /usr/local/bin/set_splunk_outputs.sh
    env:
      SPLUNK_SERVER_HOST: "instance.splunkcloud.com:9997"
  03add-inputs-to-splunk:
    command: /usr/local/bin/add-inputs-to-splunk.sh
    env:
      ENVIRONMENT_NAME: "Development"
    cwd: /root
    ignoreErrors: false
files:
  "/usr/local/bin/install-splunk.sh":
    content: |
      #!/usr/bin/env bash
      /usr/bin/wget "https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=8.1.0&product=universalforwarder&filename=splunkforwarder-8.1.0-f57c09e87251-linux-2.6-x86_64.rpm&wget=true" -O /usr/src/splunk-universal-forwarder.rpm
      /bin/rpm -i /usr/src/splunk-universal-forwarder.rpm
      if [[ -z $(pgrep splunk) ]];then
        /opt/splunkforwarder/bin/splunk start --answer-yes --no-prompt --accept-license
      fi
    mode: "000755"
  "/opt/splunkforwarder/etc/system/local/outputs.conf":
    content: |
      [tcpout]
      defaultGroup = splunkLogs
      disabled = false
      [tcpout:splunkLogs]
      server = splunk_server_host
      [tcpout-server://splunk-server-host:9997]
    mode: "000644"
  "/usr/local/bin/set_splunk_outputs.sh":
    content: |
      #!/usr/bin/env bash
      if [[ -z $SPLUNK_SERVER_HOST ]];then
        echo "$0: Cannot find splunk server host."
        exit 1
      fi
      outputs_file="/opt/splunkforwarder/etc/system/local/outputs.conf"
      if [[ -e $outputs ]];then
        chown splunk.splunk $outputs
        cp -f $outputs_file $outputs_file.orig
        sed -i "s/splunk_server_host/$SPLUNK_SERVER_HOST/g" $outputs
        if [[ -n $(diff $outputs_file $outputs_file.orig) && -n $(pgrep splunk) ]];then
          /opt/splunkforwarder/bin/splunk restart
        fi
      fi
    mode: "000755"
  "/opt/splunkforwarder/etc/system/local/user-seed.conf":
    content: |
      [user_info]
      USERNAME = admin
      PASSWORD = "fdsajigoqpkmgas"
  "/usr/local/bin/add-inputs-to-splunk.sh":
    content: |
      #!/usr/bin/env bash
      application_name=$ENVIRONMENT_NAME
      instance_name=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
      splunk_logs_hostname="$application_name/$instance_name"
      wget "https://bucket.s3.amazonaws.com/splunkclouduf.spl" -O /usr/src/splunk-credentials.spl
      export HOME=/root
      /opt/splunkforwarder/bin/splunk install app /usr/src/splunk-credentials.spl -auth admin:"fdsajigoqpkmgas"
      /opt/splunkforwarder/bin/splunk login -auth admin:"fdsajigoqpkmgas"
      /opt/splunkforwarder/bin/splunk add monitor "/tmp/logs/stacktrace.log" -hostname "$splunk_logs_hostname" -sourcetype log4j
    mode: "000755"
I am looking for SPL that can give me a list of all the knowledge objects created in the last 24 hours in the Search app. I looked at the REST SPL below, but I did not see a creation time:

| rest /servicesNS/-/search/directory
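The directory endpoint does not expose a creation time, but it does return an updated timestamp; one workaround (a sketch, treating "recently updated" as an approximation of "recently created", and with a format string you may need to adjust to match your instance's output) is:

```
| rest /servicesNS/-/search/directory
| eval updated_epoch=strptime(updated, "%Y-%m-%dT%H:%M:%S%z")
| where updated_epoch >= relative_time(now(), "-24h")
| table title eai:type eai:acl.app eai:acl.owner updated
```

This catches objects edited as well as created in the window; a true creation time would require auditing the _audit or _internal indexes instead.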
Hi all, my question is the same as the title: how am I able to index a JSON array into a metric index? I would appreciate it if anyone could help me out. Thanks!

JSON sample:

{ "MachineName": "East1", "Temperature": [ 10, 12, 13, 10, 11, 12, 14, 9, 12, 16, ................, 18, 11, 13, 11, 10, 8 ] }
{ "MachineName": "East2", "Temperature": [ 10, 12, 14, 9, 12, 16, 14, 9, 12, 16, ................, 18, 10, 12, 13, 10, 11 ] }
{ "MachineName": "East1", "Temperature": [ 10, 12, 10, 12, 13, 10, 14, 9, 12, 16, ................, 14, 9, 12, 16, 10, 8 ] }
{ "MachineName": "East2", "Temperature": [ 14, 9, 12, 16, 11, 12, 14, 9, 12, 16, ................, 18, 11, 13, 11, 10, 8 ] }
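One search-time approach (a sketch; it assumes the JSON above is already indexed as events, and that a metrics index such as my_metrics exists — the index and sourcetype names are placeholders) is to expand the array and write each reading as a metric data point with mcollect:

```
index=my_events sourcetype=machine_json
| spath MachineName
| spath path=Temperature{} output=Temperature
| mvexpand Temperature
| eval metric_name="temperature", _value=Temperature
| mcollect index=my_metrics MachineName
```

mcollect requires metric_name and _value; fields listed after the index (here MachineName) are kept as dimensions on each data point. Note that all readings in one event share that event's _time, since the array itself carries no per-reading timestamps.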
Hi all! I have this query, which gets me the list of hosts, and it works beautifully:

stuff stuff stuff
| rename host as host_changed
| dedup host_changed
| table host_changed

Now I have this other query, which also works perfectly:

| mstats prestats=true avg(load.*) WHERE (`sai_metrics_indexes`) AND host=lalalala by host span=1m
| timechart span=1m avg(load.longterm) AS Longterm by host

What I want to do is effectively combine the two, but I cannot seem to get the syntax right:

| mstats prestats=true avg(load.*) WHERE (`sai_metrics_indexes`) AND host in [search stuff stuff stuff | rename host as host_changed | dedup host_changed | table host_changed] by host span=1m
| timechart span=1m avg(load.longterm) AS Longterm by host

Thoughts? Thanks!
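A subsearch normally has to emit field=value pairs matching the outer search's field names, so renaming host to host_changed breaks the match. One sketch (assuming the placeholder base search, and that your Splunk version accepts a subsearch in the mstats WHERE clause) keeps the field named host and lets the subsearch expand to (host=a) OR (host=b) ...:

```
| mstats prestats=true avg(load.*) WHERE (`sai_metrics_indexes`) AND [ search stuff stuff stuff | dedup host | fields host | format ] by host span=1m
| timechart span=1m avg(load.longterm) AS Longterm by host
```

The trailing | format makes the expansion explicit; | fields host (rather than table) keeps the subsearch output lean.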
Hi Splunkers, I am using a choropleth map. How do I add another row of fields to the tooltip? Example:

Country: Texas
Year: 2019
Market_Segments: 100,000
Correlation_Label_Specific

The search I use is:

| inputlookup global_merge_full_2019.csv
| stats sum(Production) as Production by Country, Year, Market_Segments, Correlation_Label_Specific
| geom geo_countries featureIdField=Country
| fields Year, Market_Segments, Correlation_Label_Specific, Country, Production, featureCollection, geom

Thank you in advance.
Best, Evelyn Li
(similar post here: https://community.splunk.com/t5/All-Apps-and-Add-ons/choropleth-map-tooltip/m-p/428733)
We set the buckets to roll from hot into cold after 90 days, but for some reason it is not happening and we are running low on space. How can I manually move buckets to cold to free up space? Thank you.
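Before moving anything, it helps to see each bucket's state, age, and size; dbinspect shows this per index (a sketch; your_index is a placeholder):

```
| dbinspect index=your_index
| table bucketId state startEpoch endEpoch sizeOnDiskMB path
| sort - endEpoch
```

This tells you which warm buckets should have rolled and where they live on disk, which is the information you need before any manual move and restart of the indexer.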
I am trying to get a distinct count of tracking IDs from all of our production indexes. The issue I am running into is that for internal indexes my field of interest is named "trackingid" and for external indexes the field is named "trackingId". I have tried several things and can only get values for either internal or external, or both in separate columns. I cannot get both fields combined under a single name, "tid", which I would then split by region based on host.
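A typical way to merge two field-name variants (a sketch; the index names and host patterns are placeholders for your environment) is coalesce, which takes the first non-null value:

```
(index=prod_internal* OR index=prod_external*)
| eval tid=coalesce(trackingid, trackingId)
| eval region=case(match(host, "^eu"), "EU", match(host, "^us"), "US", true(), "other")
| stats dc(tid) AS distinct_tracking_ids by region
```

Field names in SPL are case-sensitive while index/sourcetype matching is not, which is why the two spellings end up in separate columns until they are coalesced.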
Hi team, I am just trying to apply the AppDynamics plugin inside an Android library:

apply plugin: 'com.android.library'
apply plugin: 'adeum' // this line added for AppDynamics

But I always get this error:

A problem occurred configuring the project ':mylibrary'.
> Failed to notify the project evaluation listener.
> Transforms with scopes '[EXTERNAL_LIBRARIES, SUB_PROJECTS]' cannot be applied to library projects.
> Cannot cast object 'extension 'android'' with class 'com.android.build.gradle.LibraryExtension_Decorated' to class 'com.android.build.gradle.AppExtension'

I tried a bunch of versions of Gradle and the plugin with no different outcome:

dependencies {
    classpath 'com.android.tools.build:gradle:4.0.0'
    classpath 'com.appdynamics:appdynamics-gradle-plugin:20.7.1' // this line added for AppDynamics

I tried from Gradle 3.0.X up to the latest, 4.1.0 (October 2020), and plugin versions from 4.5.X to 20.10.2. Hope someone can help me with this issue.

^ Edited by @Ryan.Paredez. I want to note this post was split off from this conversation: https://community.appdynamics.com/t5/End-User-Monitoring-EUM/Does-the-adeum-plugin-supports-library-extension/m-p/36723#M1106
I have seen a few older questions on something like this, but nothing too new. We currently generate a table manually using a number of searches, pulling data for the past few weeks, and use it to identify trends and find items with low volume or long response times. Using predict would greatly reduce this need, but I would need a split-by clause. In short, I am getting something like:

_time | Volume | low(predicted(Volume)) | high(predicted(Volume)) | ResponseTime | low(predicted(ResponseTime)) | high(predicted(ResponseTime))

What I would like is:

Operation | Volume | low(predicted(Volume)) | high(predicted(Volume)) | ResponseTime | low(predicted(ResponseTime)) | high(predicted(ResponseTime))
op1 | 10 | 4 | 15 | 9 | 5 | 15
op2 | 5 | 2 | 9 | 5 | 1 | 10

I am tabling the data, so I would have only one entry for each operation. I want it to show an overview of the operations using the predicted values for context, and then I would add formatting for values that fall outside some bounds of the predicted value.
Hi all, sorry for the really newb question (because I am one). I have Splunk Enterprise running on my standalone PC to evaluate it. I have managed to get Splunk to monitor my PC's volumes, directories, and files OK... but the **bleep** thing insists on trying to index the contents of every file too. Obviously this is very resource-intensive, and I really don't want the file contents indexed. How do I stop it from indexing file contents? Or, even better, tell it not to index any file contents except for specific file extensions? Thanks for any help.
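For a monitor input, limiting what gets read is usually done in inputs.conf with a whitelist regex on the file path (a sketch; the path and the extension list are placeholders to adjust):

```
[monitor://C:\Data]
whitelist = \.(log|txt)$
disabled = 0
```

Files whose full path does not match the whitelist are not read at all. Note that a monitor input by design ingests file contents; tracking file-system changes without reading contents is a different mechanism (Windows file auditing fed in as event logs) rather than monitor.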
Hi, I'm relatively new to Splunk. I'm building searches for mcollect to parse and store metrics into a metrics index. My intention is to later use the metrics to train ML for alerting. I have a set of endpoints with a hit count and an average response time for each endpoint, sliced into 5-minute intervals. At specific times of day I might have zero hits on a specific endpoint. Importantly, I don't have "missing data" here; there were legitimately no hits at certain times. I'm successfully using timechart | fillnull value=0 | untable to make sure I have a count for each endpoint for each timeslice, and I understand not having gaps is important for at least some of the ML algorithms.

Where I'm uncertain is the response-time values. It seems incorrect to say that the endpoint responded in 0 ms during a timeslice where there were no hits, and this could skew things, since the value will never be 0 ms when there is any hit. I could use fillnull value=NULL for these values, which seems more "correct". However, I'm unclear whether I'm going to regret those null values later when I get into ML. What is the best practice for fillnull when you're backfilling performance values? My search so far (note I need to end with _time, metric_name, _value for mcollect):

index=my_index earliest="-1d@d" latest="@d" host="prod*" "MYSTRING|*"
| eval all=split(_raw,"|")
| eval Application=mvindex(all,2)
| eval Service=mvindex(all,4)
| eval Actual=mvindex(all,8)
| eval metric_name=Application.".".Service.".actual.avg"
| bin _time span=5m
| stats avg(Actual) AS _value BY _time metric_name
| eval _value=round(_value)
| timechart limit=0 span=5m min(_value) AS _value by metric_name
| fillnull value=NULL
| untable _time metric_name _value
| mcollect index=my_index_metrics
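One option (a sketch; it keeps the pipeline above but changes only the tail) is to not write placeholder values at all for the response-time metric: backfill zeros only for hit counts, and simply drop the empty response-time slices before mcollect, leaving the imputation decision to the ML stage:

```
... | untable _time metric_name _value
| where isnotnull(_value) AND _value!="NULL"
| mcollect index=my_index_metrics
```

The trade-off: the metric series then has gaps, so if a gap-free series is required, the filling (zero, interpolation, or a sentinel) has to happen at query time for the ML search rather than at collection time.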
We created an application (mostly JavaScript, with a proxy server for communicating with Splunk) that, while running outside of Splunk, accesses Splunk Enterprise data and displays SplunkJS visualizations (using the docs at https://dev.splunk.com/enterprise/docs/developapps/visualizedata/usesplunkjsstack/ ). Unfortunately, we seem to be running into load-related issues when several users access it simultaneously, as follows:

1 - Incomplete chart displays ('waiting for data' forever) in a couple of charts that happen to depend on the completion of saved searches before their own (post-process) searches can start.

2 - Other times, this error appears in the JavaScript console and none of the charts load:

Uncaught Error: Load timeout for modules: splunkjs/ready!_unnormalized2,splunkjs/ready!
http://requirejs.org/docs/errors.html#timeout
at makeError (eval at module.exports (config.js:138), <anonymous>:166:17)
at checkLoaded (eval at module.exports (config.js:138), <anonymous>:692:23)
at eval (eval at module.exports (config.js:138), <anonymous>:713:25)

NOTE: these issues don't come up unless there are multiple simultaneous loads of the application (10 or so is all it takes), which is why we theorize they are load-related. In addition, our Splunk admin was present when we tested the app, and he did not see the relevant limits being reached on the back end (e.g. number of simultaneous searches, space used by the single service user employed by this application). Could somebody please advise on what could be happening? Are there limitations specific to the SplunkJS stack that we should be aware of?
Certain events in these logs have dates in tags such as <BeginDateTime> and <EndDateTime>. These are creating additional events when there should be only one. The same thing happens with the JMS Timestamp in event 2 pictured below. What would the correct regex be so that events are created only when the timestamp comes first, as in events 1 and 5-9? Thank you.
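A typical fix (a sketch; the sourcetype name, the date pattern, and the lookahead length are placeholders to adapt to your actual data) is to break events only at lines that begin with a timestamp, and to limit how far Splunk scans for a timestamp so the dates inside <BeginDateTime>/<EndDateTime> are ignored:

```
[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
```

LINE_BREAKER's lookahead means a new event starts only where a line opens with a date, while TIME_PREFIX and MAX_TIMESTAMP_LOOKAHEAD anchor timestamp extraction to the first characters of the event.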