I recently started up my Splunk instance in AWS, and I have all the necessary apps installed on it to monitor my AWS environment. In the Splunk Add-on for AWS I set up my Config input, but for some reason Splunk is failing to ingest my config files from my S3 bucket. Does anyone have any suggestions on how to fix this? Thanks
Hello Splunkers, I'm wondering about the best way to index an email. Not email server logs, the actual mail. There are a couple of apps that might help with this, but they are very old:

https://splunkbase.splunk.com/app/3200/
https://splunkbase.splunk.com/app/1739/

Has anyone already done this? Any advice? Christian
Hi, I am working on an external lookup. Below is my code, new.py:

import csv
import sys
import requests

infile = sys.stdin
outfile = sys.stdout

r = csv.DictReader(infile)
new_fieldnames = ["clientip", "fraud_score", "country_code", "success"]
w = csv.DictWriter(outfile, fieldnames=new_fieldnames)
w.writeheader()

apiURL = "my-api"
clientip = sys.argv[1]
URL = apiURL + clientip
r = requests.get(URL)
data = r.json()
result = {"clientip": str(data["host"]), "fraud_score": str(data["fraud_score"]), "country_code": str(data["country_code"]), "success": str(data["success"])}
w.writerow(result)

The above code gives this output:

/opt/splunk/bin/splunk cmd python /opt/splunk/etc/apps/TA-test/bin/new.py 172.168.0.2
clientip,fraud_score,country_code,success
172.168.0.2,75,US,True
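As an aside, Splunk typically drives external lookup scripts by streaming a CSV of input rows on stdin and reading the enriched CSV back from stdout, rather than passing a single IP as a command-line argument. Here is a minimal sketch of that loop; lookup_ip is a hypothetical stand-in for the poster's API call, and its hard-coded values are illustrative only:

```python
import csv
import io

def lookup_ip(clientip):
    # Hypothetical stand-in for the real API call
    # (e.g. requests.get(apiURL + clientip).json()).
    return {"fraud_score": "75", "country_code": "US", "success": "True"}

def run_lookup(infile, outfile):
    """Read CSV rows from infile, enrich each row, write CSV to outfile."""
    reader = csv.DictReader(infile)
    fields = ["clientip", "fraud_score", "country_code", "success"]
    writer = csv.DictWriter(outfile, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        result = {"clientip": row["clientip"]}
        result.update(lookup_ip(row["clientip"]))
        writer.writerow(result)

# Demo with in-memory streams; the real script would pass
# sys.stdin and sys.stdout instead.
out = io.StringIO()
run_lookup(io.StringIO("clientip\n172.168.0.2\n"), out)
print(out.getvalue())
```

Handling every row from stdin (rather than sys.argv[1]) matters because Splunk may hand the script many rows per invocation.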
I am trying to run a query that compares a search result field against a field in the lookup table. I was able to get it working, but now I am also trying to show the corresponding field for that object from the lookup table. This is what I have so far:

index=zscaler sourcetype="zscaler:syslog:zscaler_web_policy" [| inputlookup "riskiq_last_status" | return 1000 $name] | table url status

It is just matching the name field in the lookup table to the url field in the index search query. I am guessing the status field is blank because there isn't a status field in the index search results. How do I add a field from the lookup table to the search query results?
Splunk Enterprise - Windows - 8.0.5

I have tried to enable the HTTP Event Collector following this guideline https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/UsetheHTTPEventCollector - made sure that HEC is enabled and then created a token.

[http://MyScript]
disabled = 0
index = operations
indexes = operations
token = b68999b2-9f22-4b53-ba6e-0a8cfd505251
useACK = 0
description = HTTP Event Collector for script

From file D:\Splunk\etc\apps\search\local\inputs.conf

Server restarted - but still:

curl -k "https://splunkindex:8088/services/collector"
{"text":"The requested URL was not found on this server.","code":404}

So whatever I do trying to post an event fails:

curl -k "https://splunkindex:8088/services/collector" -H "Authorization: Splunk b68999b2-9f22-4b53-ba6e-0a8cfd505251" -d '{"event": "Hello, world!", "sourcetype": "manual"}'
{"text":"Invalid data format","code":6,"invalid-event-number":0}curl: (3) URL using bad/illegal format or missing URL
curl: (6) Could not resolve host: sourcetype
curl: (3) unmatched close brace/bracket in URL position 7: manual}'

But at least something is working:

curl -k "https://splunkindex:8088/services/collector/health"
{"text":"HEC is healthy","code":17}

I did also try:

| rest /services/collector/health

but that fails - so I have not fully understood the "| rest" command.

Finally, ref https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/HTTPEventCollectortokenmanagement, it says I can list the existing tokens using this command:

curl -k -u admin:password https://splunkindex:8089/servicesNS/admin/splunk_httpinput/data/inputs/http

But I cannot see any references to my token in the output.
<?xml-stylesheet type="text/xml" href="/static/atom.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">
  <title>http</title>
  <id>https://splunkindex:8089/servicesNS/admin/splunk_httpinput/data/inputs/http</id>
  <updated>2020-08-26T21:40:06+02:00</updated>
  <generator build="a1a6394cc5ae" version="8.0.5"/>
  <author>
    <name>Splunk</name>
  </author>
  <link href="/servicesNS/admin/splunk_httpinput/data/inputs/http/_new" rel="create"/>
  <link href="/servicesNS/admin/splunk_httpinput/data/inputs/http/_reload" rel="_reload"/>
  <link href="/servicesNS/admin/splunk_httpinput/data/inputs/http/_acl" rel="_acl"/>
  <opensearch:totalResults>0</opensearch:totalResults>
  <opensearch:itemsPerPage>30</opensearch:itemsPerPage>
  <opensearch:startIndex>0</opensearch:startIndex>
  <s:messages/>
</feed>
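For comparison, the payload HEC expects is JSON with the event under an "event" key, and the token goes in an "Authorization: Splunk <token>" header. Below is a sketch that builds (but does not send) such a request with the host and token from the post; the helper name is mine, not part of any Splunk API:

```python
import json
import urllib.request

def build_hec_request(host, token, event, sourcetype=None):
    """Build (but do not send) an HTTP Event Collector POST request."""
    payload = {"event": event}
    if sourcetype:
        payload["sourcetype"] = sourcetype
    return urllib.request.Request(
        url="https://%s:8088/services/collector" % host,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": "Splunk " + token},
        method="POST",
    )

req = build_hec_request(
    "splunkindex",
    "b68999b2-9f22-4b53-ba6e-0a8cfd505251",
    "Hello, world!",
    sourcetype="manual",
)
# urllib.request.urlopen(req) would perform the actual POST
# (with TLS-verification caveats comparable to curl -k).
```

One likely cause of the "Could not resolve host: sourcetype" errors above is that Windows cmd.exe does not treat single quotes as quoting characters, so the -d payload gets split into separate arguments; using double quotes with escaped inner quotes (-d "{\"event\": \"Hello, world!\"}") tends to behave better there.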
Hello. We are currently looking to utilize Splunk for monitoring a few configuration files on a server. To do that, we have added the following stanza to the inputs.conf file:

[fschange:C:\Splunk test\Test File 1.txt]
pollPeriod = 30
signedaudit = false
index = <index_name>
sourcetype = test:file:1
fullEvent = true

When searching using the following SPL:

index=<index_name> | head 10 | table path action modtime host source sourcetype

the data ingested looks good. That is, we are getting details on when the change occurred as well as the changes made within the file. But one field seems to be missing. Using the "fschange" stanza, how can we also get "user" data (i.e., who made the update)?

Regards, Max
| inputlookup system_trending.csv
| search system = "bob"
| table date "System Score"
| fieldformat date = strftime(date, "%m/%d/%Y")

The above search creates the following statistics table for a selected system named "Bob":

date        Score
05/20/2020  10
06/20/2020  12
07/20/2020  10
08/20/2020  20

How do I make this into a single value panel with spark and trend line? If I just select the single value panel from the visualizations, it just shows the date of "05/20/2020." If I use "| timechart max(date)" I get 0 results. Any ideas? Thanks!
We recently started reading the Oracle audit tables, and since the DBAs say that their Oracle instances are maxed out resource-wise, I wonder whether Splunk keeps read-time statistics for the DB read commands, or any other performance metrics.
Hi, I tried the wget command to install Splunk on an Azure cloud Linux VM, and found that it was not downloaded in gzip format. After running wget I get the file splunk-8.0.5-a1a6394cc5ae-linux-2.6-amd64.deb, but I am unable to tar that package. I am assuming the package was not downloaded with a .tar extension. When I run

tar xvzf splunk_package_name.tgz

it says "No such file or directory". It seems it is not in gzip format. Can someone help me out with this?
Has anyone written a bash script to install the Splunk forwarder on a Linux server? Or is it impossible because you have to enter an admin username and password, and also have to use different users while installing?
Hi, let's say I can get this table using some Splunk query:

id  stages
1   key1,100 key2,200 key3,300
2   key1,50 key2,150 key3,250
3   key1,150 key2,250 key3,350

Given this data, I want to reduce (average) over the keys:

key   avg
key1  100
key2  200
key3  300

I tried to use mvexpand for this, but Splunk runs out of memory and the results get truncated. So I want something more like a reduce function that can accumulate this mv field by key. Is this possible to do through a Splunk query? Here is what I have tried:

`data_source`
| fields id, stages{}.name as stage_name, somejson{}.duration as stage_duration
| eval stages=mvzip(stage_name, stage_duration)
| eval stages=mvfilter(match(stages, "key*"))
| mvexpand stages
| eval stages=split(stages, ",")
| eval stage_name=mvindex(stages,0)
| eval stage_duration=mvindex(stages,1)
| stats avg(stage_duration) by stage_name

I want to do something more efficient than `mvexpand stages` that helps me do the reduction without blowing up memory.
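To make the intended reduction concrete, here is the same average-by-key computed in plain Python. It mirrors the post's sample values only and says nothing about Splunk's internals; the point is that accumulating a running sum and count per key needs memory proportional to the number of distinct keys, not the number of expanded rows:

```python
from collections import defaultdict

def reduce_by_key(rows):
    """Average the numeric half of each "key,value" pair, grouped by key."""
    totals = defaultdict(lambda: [0.0, 0])  # key -> [running sum, count]
    for stages in rows:
        for pair in stages:
            key, value = pair.split(",")
            totals[key][0] += float(value)
            totals[key][1] += 1
    return {k: s / n for k, (s, n) in totals.items()}

rows = [
    ["key1,100", "key2,200", "key3,300"],
    ["key1,50", "key2,150", "key3,250"],
    ["key1,150", "key2,250", "key3,350"],
]
averages = reduce_by_key(rows)
# averages == {"key1": 100.0, "key2": 200.0, "key3": 300.0}
```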
I am running version 3.0.1 of the Microsoft Azure Add-on for Splunk on a single Splunk Enterprise server running version 8.0.1 on a physical Ubuntu Linux server. Sometime in the past two weeks the Azure add-on stopped pulling in data. Splunk is displaying an error message that states:

Unable to initialize modular input "azure_virtual_network" defined in the app "TA-MS-AAD": Introspecting scheme=azure_virtual_network: script running failed (exited with code 1).

I've tried restarting Splunk and the server, as well as overwriting the app with a fresh download from Splunkbase.
I'm having an issue with getting the Splunk universal forwarder to reach out to the deployment server. I have 12 servers that I've configured all the same way; 8 of them are working properly, but for some reason the last 4 will not reach out. It's not a firewall issue, as I can telnet to port 8089 on the deployment server without issue, and all of the servers have an entry in the serverlist.conf file on the deployment server. On each server we are seeing this in the splunkd logs:

08-26-2020 11:18:32.341 -0500 DEBUG DC:DeploymentClient - Creating a DeploymentClient instance
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - Setting : disabled=false
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - Setting : workingDir=c:\Program Files\SplunkUniversalForwarder\var\run
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - Setting : clientName=DDF77B18-237A-4753-B250-BC8D91C28FF4
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - Setting : repositoryLocation=c:\Program Files\SplunkUniversalForwarder\etc\apps
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - Setting : serverRepositoryLocationPolicy=acceptSplunkHome
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - Setting : serverEndpointPolicy=acceptAlways
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - Setting : maxRetries=3
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - Setting : waitInSecsBetweenRetries=60
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - Setting : phoneHomeIntervalInSecs=60
08-26-2020 11:18:32.356 -0500 INFO DC:DeploymentClient - target-broker clause is missing.
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - Setting : endpoint=$deploymentServerUri$/services/streams/deployment?name=$tenantName$:$serverClassName$:$appName$
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - Setting : reloadDSOnAppInstall=false
08-26-2020 11:18:32.356 -0500 WARN DC:DeploymentClient - DeploymentClient explicitly disabled through config.
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - trace 1
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - trace 2
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - trace 3
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - trace 4
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - trace 5
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - trace 6
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - trace 7
08-26-2020 11:18:32.356 -0500 DEBUG DC:DeploymentClient - trace 8
08-26-2020 11:18:32.356 -0500 INFO DS_DC_Common - Deployment Client not initialized.
08-26-2020 11:18:32.356 -0500 INFO DS_DC_Common - Deployment Server not available on a dedicated forwarder.

Our deployment.conf is in the correct place, and it explicitly has disabled set to false under the [deployment-client] heading. I've uninstalled and installed the forwarder multiple times and restarted the services on the deployment server. It just somehow thinks the deployment client is disabled by default in the config on these 4 servers.
I'm not sure if there is an answer to this question but as of right now, I'm using fieldsummary to get a better understanding of my data and specific fields in my data. Buuut, that's about where my fieldsummary journey ends. Are there any other interesting ways you use fieldsummary?
Hi, below is the existing query. When I run it for a single index with its fields, I get the statistics data and count, but when I try to run all 4 indexes and their respective fields together, I do not get any statistics table as output. What changes do I need to make to list out all 4 indexes with their respective counts in statistics? (For a single index and its fields it works fine, but when I try to merge all 4 index details the statistics value is 0.)
First off, I am very new to Splunk, and that may be my downfall. Our latest Splunk guru has left and this fell to me rather abruptly, so I apologize in advance. I have been tasked with generating a report showing users that are logging into local computers with privileges elevated beyond their standard daily accounts. For example, a user might have two logins: username and ADUserName. I need to find out if username is a local admin on their computer and when they have logged in using that account. I have been trying to figure this out, but over the last two weeks I haven't actually made any progress. Hoping someone can point me in the right direction - thank you very much!
Hi all, I have a search string like below:

index=qrp STAGE IN ("*_LDD",TRADE_EVENT,SOPHIS_TRANS,SOPHIS_INSTR,ORDER_EVENT)
| timechart useother=f span=1h sum(TRADES) as "TradeCount" by ODS_SRC_SYSTEM_CODE
| fillnull value=0

The result screenshot is below. AR1, BE1, etc. are source system codes, and the numerical values for each source system in the rows are the aggregate trade counts for the respective source system in that time span, starting from 00:00:00.

I want to schedule an alert at 8 a.m. that should be triggered when the summation of the trade aggregate values from 00:00:00 until 08:00:00 for any source system is not more than the threshold of 2000. For example, in the screenshot we can see that the summation of trades for source system MXT is 1030 between 00:00:00 and 08:00:00, which is below the threshold, so an alert email notification should be sent. Could you please help with how to set up my search string and what my alert triggering condition should be? Thanks in advance. Your help is much appreciated.
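As a sanity check on the triggering condition being described, here it is in plain Python: sum each source system's hourly counts from 00:00 through 08:00 and flag any system whose total falls below 2000. Only the MXT total of 1030 comes from the post; the hourly breakdown and the AR1 numbers are made up for illustration:

```python
def below_threshold(hourly_counts, threshold=2000, start=0, end=8):
    """Return source systems whose summed counts over hours
    [start, end) fall below the threshold."""
    flagged = {}
    for system, counts in hourly_counts.items():
        total = sum(counts[start:end])
        if total < threshold:
            flagged[system] = total
    return flagged

hourly = {
    "MXT": [200, 150, 100, 130, 120, 110, 120, 100],  # sums to 1030
    "AR1": [400, 350, 300, 330, 320, 310, 320, 300],  # sums to 2630
}
alerts = below_threshold(hourly)
# alerts == {"MXT": 1030} -- MXT is under the threshold, AR1 is not
```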
Hey all - I ran into another issue using the AppInspect API. I'm attempting to follow the "Submit an app for validation" step listed here - https://dev.splunk.com/enterprise/docs/developapps/testvalidate/appinspect/splunkappinspectapi/runappinspectrequestsapi/. However, I end up getting the error below. For the sake of simplicity and security, I shortened the token ID (which I received after following the first step to authenticate to the REST API endpoint using my Splunk credentials) to a few characters. Any idea where I could have gone wrong?

curl -X POST \
-H "Authorization: bearer u8dhr7g4a" \
-H "Cache Control: no-cache" \
-F "app_package=@\"/opt/splunk/etc/apps/learned\\""\
--url "https://appinspect.splunk.com/v1/app/validate"

Here is the error that I encountered:

curl: (3) Host name ' -H' contains bad letter
curl: (3) Illegal characters found in URL
curl: (6) Could not resolve host: bearer
bash: u8dhr7g4a \ -H Cache: command not found
Hi, I have created a summary index using the collect command, and in the summary index my actual host and source fields get converted into orig_host and orig_source respectively. Now I want to know: is there any concept of metadata for a summary index? In a normal index, host, source, and _time are default index-time extracted fields, and using host/source makes search results faster since Splunk searches only those buckets containing the searched host/source. Is there a similar concept for a summary index? Or are all fields in a summary index index-time extracted fields? Any guidance would be appreciated.
Hi guys. We have a dev environment Splunk cluster with a dev license that LnP and dev teams send their data to. They have a logging process on their systems, the same as live, that is logging far too much data for our dev license. They don't need the entire data set in dev; 30%, for example, is fine for their uses in development (not LnP), for testing dashboards etc. To save them the need to re-write their code to only log every 3rd event, or a percentage of events, does anyone here know if it's possible to configure Splunk at the input or heavy forwarder level to drop a percentage, or every xth event?
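Setting Splunk configuration aside for a moment, the sampling logic being asked about (every 3rd event, or roughly 30% of events) is simple to state precisely. This is a plain-Python illustration of the two variants, not a Splunk feature:

```python
import random

def keep_every_nth(events, n=3):
    """Deterministic sampling: keep every nth event (the 1st, 4th, 7th, ...)."""
    return [e for i, e in enumerate(events) if i % n == 0]

def keep_fraction(events, fraction=0.3, seed=None):
    """Probabilistic sampling: keep roughly the given fraction of events."""
    rng = random.Random(seed)
    return [e for e in events if rng.random() < fraction]

events = ["event%d" % i for i in range(9)]
sampled = keep_every_nth(events)
# sampled == ["event0", "event3", "event6"]
```

On the Splunk side, dropping events at parse time is usually done with a props/transforms rule routing matches to the nullQueue, but that filters by pattern rather than by percentage, which is why this sketch stays at the logic level.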