All Topics

Need to monitor a website that, when hit, shows a popup asking for a Username and Password. I have tried the following so far:
1. Used the Website Input app, but on the "Enter Credentials" page it throws a "Username and Password field name not detected" error.
2. Tried passing credentials in the URL as below, but got an unauthorized error:
https://URL/insecurelogin?loginType=splunk&username=EMEA\abc&password=XYZ
https://URL/insecurelogin?username=EMEA\abc&password=XYZ
If anyone has done this kind of monitoring, please share your inputs. Thanks
I have a large amount of data with dynamic URL values like the example below. How can I turn these into hyperlinks?
Ex:
Name    Link
aris    https://aerial.behind.com/app/view/c0433334d-4418-wede2323344-2223
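A sketch of one common approach in a classic Simple XML dashboard, assuming the URL column is named Link as in the example (a row drilldown that opens the clicked row's URL in a new tab; the search here is a placeholder):
<table>
  <search><query>index=my_index | table Name, Link</query></search>
  <drilldown>
    <link target="_blank">$row.Link|n$</link>
  </drilldown>
</table>
The |n filter passes the URL through without re-encoding it.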
Please help answer this question, thank you. For these two multivalue fields, I want each value in the "Recipient" field to correspond to the matching value in "recipient_status". If delivery to the recipient succeeded, the status is ";"; if it failed, it is "'550 5.1.1 resolver.adr.recipnotfound, not found'". Is there a way to split the values of these two fields and pair them one-to-one? These are the values of the two fields:
Recipient="@000.com @123.com @456.com @789.com", recipient_status=";;'550 5.1.1 RESOLVER.ADR.RecipNotFound; not found';"
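A minimal SPL sketch of one way to pair them, assuming Recipient splits cleanly on spaces and recipient_status splits cleanly on ";" (note the quoted failure strings in the sample also contain ";", so the status delimiter may need adjusting against real data):
| eval recips=split(Recipient, " "), statuses=split(recipient_status, ";")
| eval pairs=mvzip(recips, statuses, "|")
| mvexpand pairs
| eval Recipient=mvindex(split(pairs, "|"), 0), recipient_status=mvindex(split(pairs, "|"), 1)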
Beginner user here.
PART 1
I want to track documents across multiple sources to ensure they reach their destination:
Source 1 - Source 2 or 3 - Source 4
Start Point (Sent) - Middle Points (Accepted or Rejected) - End Point (Received)
Each document has the following:
ID = unique to each document
DATE/TIME STAMP = the time the document arrived at that point
DESCRIPTION = like a subject line, describing what the document contains
All documents have a unique ID that is tracked at each source. I want to track this ID and ensure that it has gone through source 1, 2, or 3 and arrived at 4. If for some reason it is in 2 but not in 4, display that Doc ID in a table.
PART 2 - I can probably work this one out myself once I know how to link everything. After they are linked, I would like to compare the time between when a document was at source 1 and when it arrived at source 3.
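A minimal SPL sketch of one way to link them, assuming all four sources land in a single index and the field names match the description above (the index and source values are placeholders):
index=documents (source=source1 OR source=source2 OR source=source3 OR source=source4)
| stats values(source) as seen_at, earliest(_time) as first_seen, latest(_time) as last_seen by ID
| where isnull(mvfind(seen_at, "source4"))
| table ID, seen_at, first_seen, last_seen
The where clause keeps only the IDs that were never seen at the end point; the first_seen/last_seen pair also gives the transit time asked about in PART 2.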
Hi all, I am creating a custom app that requires the Python pandas module. Can you please let me know how I can install pandas on Splunk Enterprise and use it in my custom application? Will this cause any issues for Splunk overall?
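One common pattern, sketched under the assumption that pandas is pip-installed into a lib/ folder inside the app rather than into Splunk's bundled interpreter (the app name is a placeholder; pandas ships compiled extensions, so the wheel must match the Python version Splunk runs, and installing into Splunk's own site-packages risks being wiped on upgrade):
# on the command line, install into the app's lib/ folder
pip install pandas -t $SPLUNK_HOME/etc/apps/my_custom_app/lib

# then, at the top of the app's script under bin/
import os, sys
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
import pandas as pd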
How do I set a "Trigger Condition" on a Splunk report like you would  when creating an alert? My issue is that I have created a report that I want to generate an email from when Number of Results = 0 ie, when no file has been uploaded/detected. Some people would argue why don't I just create an alert instead? My dilemma is, with an alert it won’t let you add a "Time Range" as I want my daily report to track the previous 7 day time range  My search string looks like this: index=it_sts_xfer_prod_us xferPath="*GIDM*" OR xferPath="*sailpnt*" OR xferPath="*identity*" OR xferPath="*InternalAudit*" xferFile="FILENAME.csv" | eval _time=_time-xferSecs | convert ctime(_time) as Time timeformat=%m/%d/%y
Working on a search where there's a field (Office Location) with about 5 different values that are stored in a lookup file. We're looking at attendance at a specific office (office 1) and differentiating who's actually going in. Specifically, we want to separate people assigned to office 1 from those assigned to a different office. The original search looks like this, but it populates all the locations rather than just "office 1 or not":
index=index EVDESCR="event" READERDESC="reader" | lookup users.csv ID as EMPLOYEE_ID | timechart span=1d dc(CARDNUM) by Location limit=0
I tried using this eval statement to reduce the search to just two values, yes (home office) or no (not home office):
|eval Home=if(Location"office1", yes, no)
The problem is this eval statement doesn't work, and I'm not sure what I'm doing wrong. Any help is appreciated.
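The eval as written is missing a comparison operator and the quotes around the string literals; a corrected sketch, charting by the new field instead of Location:
| eval Home=if(Location=="office1", "yes", "no")
| timechart span=1d dc(CARDNUM) by Home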
Hello community,
What is the most efficient way of retrieving a specific search that was performed or, preferably, if possible, regenerating a file (csv/pdf) export of its results? So far I have located the search/job ID and worked my way back to a search string (SPL). I am curious, though: is it possible to "re-run" the SPL snippet and just "re-generate" the file export for inspection? Otherwise, what is the fastest and easiest way to get from a search/job ID to the actual SPL search query used to generate the file export?
Best regards // G
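Two sketches that may help, assuming the job artifact has not yet expired and the _audit index is readable (the SID shown is a placeholder): loadjob re-fetches a completed job's results by its SID without re-running it, and _audit records the SPL that each job ran.
| loadjob 1621349876.12345

index=_audit action=search info=completed
| search search_id="*1621349876.12345*"
| table _time, user, search
From loadjob's results, the Export button on the search page regenerates the csv.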
Any advice on this search? Although it simply produces what I need, it also lumps the system names in with it.
index=main LogName=Security EventCode=4738 Account_Name="*" | table Account_Name | dedup Account_Name
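If the extra names are machine accounts (in Windows Security logs they end in "$"), one hedged sketch is to filter them out before deduplicating:
index=main LogName=Security EventCode=4738 Account_Name="*"
| search Account_Name!="*$"
| dedup Account_Name
| table Account_Name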
Hello, I have XML source files in a location where a Splunk HF is installed. These files are updated with new data; no new files are created. When new data comes in, the existing files are appended, with the new data added at the end. I know Splunk has a CRC check for duplicate entries based on a hash function. My question is: will the Splunk UF/HF be able to read only the new data that is added at the end of an existing file, without errors (or missing info), and send it to the indexer? Any recommendations/thoughts would be highly appreciated. Thank you so much for your support.
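For context: a monitored file is tracked with a seek pointer as well as the CRC, so appends are normally read from where the last read stopped rather than re-ingested. A minimal inputs.conf sketch with placeholder path and sourcetype (initCrcLength is only worth raising if many files start with an identical header longer than the default 256 bytes, which can make their CRCs collide):
[monitor:///data/xml/*.xml]
sourcetype = my_xml
initCrcLength = 1024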
I am using Kali Linux, and when I try to install the Splunk App for AWS the system keeps telling me that the username and/or password are incorrect, even though I am using the exact same credentials that I used to log in. I have tried using "admin" as the username, changing the password, and many other things, but it keeps giving me this error. I tried it with the Cloud instance and it gives me the same error. I would really appreciate some help on this, because I will not be able to make the most of Splunk without apps.
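One common cause, offered as a guess: the in-product "Browse more apps" install dialog asks for splunk.com account credentials, not the local admin login. If that is not it, a workaround sketch is to download the app package from Splunkbase in a browser and install it from the CLI (the file path and password are placeholders):
$SPLUNK_HOME/bin/splunk install app /tmp/splunk-app-for-aws.tgz -auth admin:<password>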
Whenever I have to make an add-on work, even if it's from the same vendor (i.e. Microsoft, in my case right now), will I need a Client ID and Client Secret every time?
https://splunkbase.splunk.com/app/4564/ - Microsoft Graph Security API Add-On for Splunk (this one is configured)
https://splunkbase.splunk.com/app/6207/ - Splunk Add-on for Microsoft Security (this one is not configured)
Hi Experts, I want to trigger an alert when a particular host has not reported source=WinEventLog:Security to Splunk in the last hour. I have a list of 30 critical hosts, and for those I have created a csv lookup as shown below.
DC_Machines.csv
host    source
abc     WinEventLog:Security
bcd     WinEventLog:Security
xyz     WinEventLog:Security
What I have achieved so far:
| inputlookup DC_Machines.csv | join type=left host [metadata type=hosts index=os_windows index=os_windows_dc ] | fillnull recentTime | where recentTime < relative_time(now(), "-1h") | fields host,recentTime,source
The above gives me the hosts from the lookup table that are not reporting at all (fine), but what about the hosts that are reporting everything except source=WinEventLog:Security? I want the query above to return only the hosts that are missing just that one source. My approach might be completely wrong, or maybe I am missing something; I tried to add a filter on source, which does not work in the above logic. Any suggestions, please? Thank you in advance.
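A sketch keyed on host and source together rather than host alone, since metadata cannot split source by host but tstats can (index names follow the post above):
| inputlookup DC_Machines.csv
| join type=left host source [
    | tstats latest(_time) as recentTime
      where (index=os_windows OR index=os_windows_dc) source="WinEventLog:Security"
      by host, source ]
| fillnull recentTime
| where recentTime < relative_time(now(), "-1h")
| table host, source, recentTime
A host that sends other sources but not WinEventLog:Security gets no match for its host/source pair, so fillnull leaves recentTime at 0 and the where clause flags it.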
Hello Team, we are using Enterprise Security in our environment and have created correlation searches to trigger alerts. One particular use case relates to process injection, and we are receiving the maximum number of incidents on it because of one executable. I am not giving the name of the exe file due to security concerns, so I will refer to it as ___.exe. The alerts read: Shellcode Injection activity detected. Process "___.exe" has injected itself into ___.exe, ***.exe, +++.exe, ^^^.exe - and we are receiving many incidents like this. I have some doubts:
1. What is process injection, and why do processes get injected into others?
2. How could we stop this from injecting into others, in Splunk (or from Endgame)?
3. We want to fine-tune this use case; is there an alternative way to suppress these alerts from Endgame?
4. We have whitelisted some of these alerts by source process path and target process path, but ___.exe still triggers with new target paths; it is always injecting into some other target.
Can anyone please explain this clearly? I am a rookie in cyber security and am not following many things, but this is the one that is haunting me. Thanking you in advance.
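On point 4, one tuning sketch: since the target keeps changing, whitelist on the source process path alone via a lookup subsearch appended to the correlation search (the lookup name and field name here are hypothetical and would need to match the actual event fields):
... existing correlation search ...
| search NOT [| inputlookup injection_whitelist.csv | fields source_process_path]
The subsearch expands into an OR of the whitelisted paths, so any alert whose source_process_path appears in the csv is dropped before it becomes a notable.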
I have APIs with parameters in the middle of the path, and I am trying to match the API against a csv, but it doesn't match when using lookup. E.g., /serviceName/api/127364/api is what is shown in the access logs, and I have /serviceName/api/*/api in my csv file. Any idea how to make this work?
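csv lookups match literally by default, so the * in the file is treated as a plain character; wildcard keys need match_type in transforms.conf. A sketch, assuming the lookup is defined as api_lookup and the key column is named api_path (both placeholders):
[api_lookup]
filename = api_list.csv
match_type = WILDCARD(api_path)
The same setting is exposed in Splunk Web under the lookup definition's advanced options.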
Working on a fresh install of Stream into an on-prem distributed environment with a small number of endpoints. I'm not sure where to install and operate Stream from, and I've seen differing instructions from 2019 to the present. Is the current best practice to install and operate Stream from a standalone server, or to install and run it from the deployment server?
Hi, I created a Splunk server on AWS and, using the UI, I constructed an HEC input to listen for some logs. I am using Docker's splunk logging driver to send the logs. If I leave the config the same on both servers, I receive the error:
"Error response from daemon: Options "https://<IP>:8088/services/collector/event/1.0": x509: certificate relies on legacy Common Name field, use SANs instead"
So I tried to change the Splunk config so that it will work with my self-signed certificate (which uses SANs). I did this by changing the inputs.conf in which the HEC was configured (bizarrely enough, under $SPLUNK_HOME/etc/apps/search/...) to have an [http] stanza with the path of the self-signed cert:
[root@machine introspection]# cat $SPLUNK_HOME/etc/apps/search/local/inputs.conf
[http]
serverCert = /opt/splunk/etc/auth/certs/root.pem
[http://test]
disabled = 0
host = <ip>
sourcetype = generic_single_line
token = <token>
I then moved the relevant [http] stanza to where I believe it should be (.../apps/splunk_httpinput/...), but this didn't help. In fact, as soon as I put this stanza in, SSL connections to the Splunk IP on the relevant port no longer complete, for example:
openssl s_client -connect <ip>:8088
I would appreciate assistance with either fixing the original SANs issue (as it's the splunk logging driver on Docker) or with the issue of using a self-signed cert on HEC. Thanks!
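For comparison, a sketch of the shape that stanza usually takes in $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf. A frequent cause of the s_client failure described above is that serverCert must point to a PEM containing the server certificate, its private key, and the CA chain concatenated together, with sslPassword set if the key is encrypted (paths and values are placeholders):
[http]
disabled = 0
enableSSL = 1
serverCert = /opt/splunk/etc/auth/certs/hec-server.pem
sslPassword = <private-key-password>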
Hi,
A team has asked me to keep 3 months of data. I have told them that we have limited space on the disks and that retention is a function of data volume, not time. (I know an index gets full and data will drop off at the back.) But how do I know how far back my data goes, so I can judge whether I need more disk space or to increase the size of the index? Below is a screenshot taken from one of the indexes; however, I don't believe the reading. I know that I keep up to 3-6 months of data, but not years. So if I try running a search like this, I don't think it will ever end. Also, I don't believe the data from 2017; I think somehow it got backdated.
Any help would be great - cheers
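A sketch that avoids scanning events at all by reading bucket metadata, which answers "how far back does this index go" in seconds (the index name is a placeholder):
| dbinspect index=my_index
| stats min(startEpoch) as oldest, max(endEpoch) as newest
| eval oldest=strftime(oldest, "%F %T"), newest=strftime(newest, "%F %T")
Note that a single badly timestamped event (the suspected 2017 backdating) will show up here as an improbably old bucket start time.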
We get a weekly ingest of a data set for our vulnerability management. Each line contains a unique value matching a vulnerability with a server. I want to be able to report on:
a. how many new vulnerabilities are in this week's report compared to last week, and
b. how many vulnerabilities have been fixed (so are no longer reported) in this week's list compared to last week.
I'm looking for Splunk to tell me what's new and what's missing week by week, but also to track these over the long term. I can't seem to get any meaningful results with a 'set diff' search. Any help gratefully received!
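A sketch of a week-over-week diff without set diff, assuming the unique vulnerability-per-server value is in a field called vuln_key and the index name is a placeholder:
index=vuln_reports earliest=-14d@d
| eval week=if(_time >= relative_time(now(), "-7d@d"), "current", "previous")
| stats values(week) as weeks by vuln_key
| eval status=case(mvcount(weeks)==2, "ongoing", weeks=="current", "new", weeks=="previous", "fixed")
| stats count by status
Keys seen only in the current window are new, and keys seen only in the previous window are fixed; scheduling this weekly and writing the per-status counts to a summary index would give the long-term trend.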
I'm looking for a way to have multiple nested filters in a dashboard. Currently I'm creating attendance charts for our estaff, and we need more granularity due to the amount of unique data. The goal is to be able to isolate 3 values: Function, Hierarchy, and Cost Center. Currently there are 3 functions, 28 hierarchies, and nearly 200 cost centers. With the nested filters I hope to be able to choose a function, which would populate only the hierarchies under that function; then, when choosing a hierarchy, populate only the cost centers under it. The issue I'm running into is that I can create two filters, but once I add the 3rd filter the entire search breaks and the charts no longer update.
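A sketch of the usual cascading pattern in Simple XML, assuming the three values live in a lookup named org.csv with columns function, hierarchy, and cost_center (all placeholders); each dropdown's populating search filters on the token set by the one above it:
<input type="dropdown" token="function_tok" searchWhenChanged="true">
  <label>Function</label>
  <fieldForLabel>function</fieldForLabel>
  <fieldForValue>function</fieldForValue>
  <search><query>| inputlookup org.csv | stats count by function</query></search>
</input>
<input type="dropdown" token="hierarchy_tok" searchWhenChanged="true">
  <label>Hierarchy</label>
  <fieldForLabel>hierarchy</fieldForLabel>
  <fieldForValue>hierarchy</fieldForValue>
  <search><query>| inputlookup org.csv | search function="$function_tok$" | stats count by hierarchy</query></search>
</input>
<input type="dropdown" token="cost_center_tok" searchWhenChanged="true">
  <label>Cost Center</label>
  <fieldForLabel>cost_center</fieldForLabel>
  <fieldForValue>cost_center</fieldForValue>
  <search><query>| inputlookup org.csv | search function="$function_tok$" hierarchy="$hierarchy_tok$" | stats count by cost_center</query></search>
</input>
A common cause of the third filter "breaking" everything is the panel search running before all three tokens are set; giving each input a <default>, or making the panel depend on the last token, holds it until every input has a value.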