All Topics

Hi, I was given a request to use CSV lists (i.e. lookups) with keyword values to find USB writes in an index, where a field named "file-name" holds info about files written to USB. The file-name values are not consistent; most often the value is a file path, like "D:/Downloads/foo/bar/foo-bar.txt" or something like that. So file-name is actually a file path.

I was asked to use a CSV supplied to me as the lookup criteria. keyword.csv is the lookup name:

keyword   keyword-ID
*red*     34948-kjas
*green*   89050-kjec
*blue*    89008-nkme

The column header fields are "keyword" (which is a wildcard string) and "keyword-ID" (which is an arbitrary ID). I wrote a query like this:

index=foo sourcetype=bar [| inputlookup keyword.csv | fields keyword | rename keyword as file-name] | stats count by file-name

and I get the counts of each unique file-name, which is what I thought the requestor wanted, but that is not the case. They want to know the count by keyword, like red=5, green=1, blue=3, etc. So I am stuck getting the results from my query piped back into the lookup to count by the keywords. I was advised in Slack to use wildcard matching to reverse the lookup, but I could not get it to work.

UPDATE: After a few iterations I came up with the following, but I don't know if it is the best way to do it. I verified the lookup definition for keyword.csv is WILDCARD(keyword) under advanced settings, and all the keywords are of the form * keyword *:

index=foo sourcetype=bar [| inputlookup keyword.csv | fields keyword | rename keyword as file-name] | rename file-name as keyword | lookup keyword.csv keyword OUTPUT keyword as Matched | stats count by Matched

Now I get these results:

Matched   Count
*red*     10
*green*   5
*blue*    2

These are the results they want. Does anyone know how to improve this? Thank you
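One possible simplification of the pattern above (a sketch, not a confirmed answer): the wildcard matching only applies when the search references the lookup definition that carries WILDCARD(keyword), not the bare .csv file, so the rename back and forth can be dropped by matching file-name directly. The definition name keyword_lookup below is an assumption:

```
index=foo sourcetype=bar [| inputlookup keyword_lookup | fields keyword | rename keyword as file-name]
| lookup keyword_lookup keyword AS file-name OUTPUT keyword AS Matched
| stats count by Matched
```

The subsearch still pre-filters events to those containing the keywords; the lookup then tags each event with which wildcard pattern it matched.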
Hello, we are using a Splunk Heavy Forwarder to consume data from Kafka topics (flow #1) and forward it to the Splunk server (flow #2), i.e.

Kafka Cluster --- (1) ----> Splunk HF ----- (2) -----> Splunk backend system

The Kafka cluster has been configured to support SSL/TLS encryption on port 9093, e.g. bootstrap-endpoint:9093. Could you please give me some guidance on how to configure the Splunk Heavy Forwarder so it can consume the Kafka topics over SSL/TLS? Thank you very much for your guidance in advance. Best regards, Yongyuth
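Splunk does not consume Kafka natively; this is usually done with an add-on such as Splunk Connect for Kafka or a Kafka modular input running on the HF. Whichever consumer is used, the TLS side is standard Kafka client configuration — a sketch of the consumer properties, with truststore/keystore paths and passwords as placeholders:

```
bootstrap.servers=bootstrap-endpoint:9093
security.protocol=SSL
ssl.truststore.location=/opt/kafka/ssl/client.truststore.jks
ssl.truststore.password=changeit
# Only needed if the brokers require mutual (client) authentication:
ssl.keystore.location=/opt/kafka/ssl/client.keystore.jks
ssl.keystore.password=changeit
```

The truststore must contain the CA that signed the brokers' certificates.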
I want to edit server.conf for around 600 servers. Is there any way we can edit them all at once?
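The usual approach for fleet-wide changes like this is not to edit server.conf by hand at all, but to push an app containing the override from a deployment server (or via configuration management such as Ansible). A sketch with hypothetical app and server-class names:

```
# On the deployment server:
# $SPLUNK_HOME/etc/deployment-apps/all_serverconf_override/local/server.conf
[general]
# ... the settings you want on all 600 servers ...

# $SPLUNK_HOME/etc/system/local/serverclass.conf
[serverClass:all_servers]
whitelist.0 = *

[serverClass:all_servers:app:all_serverconf_override]
restartSplunkd = true
```

Settings in a deployed app override the defaults by normal configuration precedence, so the per-host server.conf files stay untouched.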
Hi, I need to be able to know when a dashboard is 100% complete. How can I get a token for this? I had an idea of doing something like this after each search:

<done>
  <condition match="'job.resultCount' > 0">
    <set token="PAGE_LOADED1">LOADED</set>
  </condition>
</done>

However, this is a very manual way of doing it (PAGE_LOADED1 + PAGE_LOADED2 + PAGE_LOADED3 > 0, etc.). I was hoping for an easy piece of code that I can put at the top of each dashboard. Thanks, Robert
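One way to cut the boilerplate is a single accumulator token that every search increments in its <done> handler, instead of one PAGE_LOADEDn token per search — an untested sketch; the final check still has to know the total number of searches on the dashboard:

```xml
<search id="search1">
  <query>index=_internal | head 1</query>
  <done>
    <!-- one shared counter instead of PAGE_LOADED1, PAGE_LOADED2, ... -->
    <eval token="done_count">if(isnull($done_count$), 1, $done_count$ + 1)</eval>
  </done>
</search>
```

Each search carries the same three-line <done> block; a condition comparing $done_count$ to the number of searches can then set a single PAGE_LOADED token.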
Hello, I have some directories that I need to monitor. Using updated inputs for the TA_nix app, I am adding syslog and linux:audit data from specific paths. It mostly works as expected, BUT I had a few outliers.

Here's the basic directory structure: /var/log is standard, BUT the messages coming from other hosts go to /var/log/remote. Under this path there are two types of logs, syslog and linux:audit, as well as .bz2 files, which we never want indexed from any path:

/var/log/remote/202/02/<environment_name>/messages/<files>
/var/log/remote/202/02/<environment_name>/audisp/<files>

Within each of these is an archive directory as well; it contains files still being written to, plus .bz2 files, which we never want indexed from any path:

/var/log/remote/202/02/<environment_name>/messages/archive/<files>
/var/log/remote/202/02/<environment_name>/audisp/archive/<files>

So the inputs I created look like this:

[monitor:///var/log]
whitelist=(\.log|log$|messages|secure|auth|mesg$|cron$|acpid$|\.out)
blacklist=(lastlog|anaconda\.syslog|\.bz2$|audisp|\_audisp.log|\audisp.log\-)
index=nix_os
disabled = 0

[monitor:///var/log/remote/*]
whitelist=(messages|\_messages\.log|_messages\.log\-)
blacklist=(\.bz2$|audisp|\_audisp.log|\audisp.log\-)
index=nix_os
sourcetype = syslog
disabled = 0
recursive=true

[monitor:///var/log/remote/*]
whitelist=(audisp|\_audisp.log|\audisp.log\-)
blacklist=(\.bz2$|\_messages\.log|_messages\.log\-)
index=nix_os
sourcetype = linux:audit
disabled = 0
recursive=true

What I have found is that there are files with the sourcetype set to the filename, when it should be either syslog or linux:audit. For example, the file

/var/log/remote/2020/02/corp/messages/archive/hostname.domain.com_messages.log-20200206

got the sourcetype hostname.domain.com_messages.log-20200206. Also, these did not index:

/var/log/remote/2020/02/corp2/audisp/archive/:
<ip-hidden>_messages_audisp.log-20200204
<ip-hidden>_messages_audisp.log-20200205
<ip-hidden>_messages_audisp.log-20200206

Can anyone tell me:

1. Why did the messages file hostname1234.domain.com_messages.log-20200206 get the sourcetype set to the file name (some are set to "too-small" as well), e.g. sourcetype=hostname1234.domain.com_messages or sourcetype=hostname1234.domain.com_messages-too_small?
2. Why didn't the /audisp directory and the corresponding files index? For example: /var/log/remote/2020/02/corp2/audisp/archive/<ip-hidden>_messages_audisp.log-20200204

Thanks for your assistance
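Two things in inputs like these tend to produce exactly those symptoms (a sketch of one possible restructuring; the globbed paths are assumptions). First, a conf file cannot contain two stanzas with the same name, so the two [monitor:///var/log/remote/*] stanzas are merged into one, which would explain one of the two sourcetypes never taking effect. Second, [monitor:///var/log] is recursive by default and its whitelist also matches "messages" under /var/log/remote; since that stanza sets no sourcetype, Splunk falls back to automatic, filename-based sourcetyping ("-too_small" is what the sourcetype learner appends when a learned sourcetype has too little data). Distinct stanza paths avoid both problems:

```
[monitor:///var/log]
whitelist = (\.log|log$|messages|secure|auth|mesg$|cron$|acpid$|\.out)
blacklist = (lastlog|anaconda\.syslog|\.bz2$|remote)
index = nix_os

[monitor:///var/log/remote/*/*/*/messages]
blacklist = \.bz2$
index = nix_os
sourcetype = syslog

[monitor:///var/log/remote/*/*/*/audisp]
blacklist = \.bz2$
index = nix_os
sourcetype = linux:audit
```

Blacklisting "remote" in the first stanza keeps the generic /var/log input from claiming the remote files with an auto-learned sourcetype.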
Hello- My current setup:

Device Syslog --> Syslog Server w/ Splunk Heavy Forwarder --> Splunk Indexer

When I restart my Heavy Forwarder server or splunkd, it takes up to 30 minutes to begin forwarding syslogs to the indexer. Is this due to the number of devices and folders stored on the syslog server, and is there a way to speed this process up? Thanks,
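One common cause (an assumption worth verifying in splunkd.log) is the forwarder re-checking every monitored file under the syslog tree on startup before it resumes tailing. If old files never need to be re-read, ignoreOlderThan in inputs.conf can shrink that startup scan dramatically — a sketch with a placeholder path:

```
[monitor:///var/syslog/devices]
ignoreOlderThan = 7d
```

Caveat: files skipped by ignoreOlderThan are never indexed, even if they are later modified, so it only suits directories where stale files are truly dead.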
Hi everybody, I'm trying to deploy two apps to a universal forwarder from a deployment server. The problem I'm encountering is that when the deployment finishes and the Splunk Universal Forwarder service restarts, the deployed apps don't work; if I deploy only one app, it works and I receive the logs.

My configuration is the following. On my Universal Forwarder I have App1 and App2. The inputs.conf file from App1 has this config:

[WinEventLog://ForwardedEvents]
index=index1
sourcetype=sourcetype1
whitelist= 4100,4104,4103
evt_resolve_ad_obj=1
renderXml=0

App2 has the same configuration, but for the complementary events:

[WinEventLog://ForwardedEvents]
index=index2
sourcetype=sourcetype2
blacklist= 4100,4104,4103
evt_resolve_ad_obj=1
renderXml=0

These apps work separately, but not together. Is there some kind of limit on using several apps on a single universal forwarder?
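A likely explanation worth verifying with btool (this is an assumption, not confirmed from the post): inputs.conf stanzas with the same name are merged across apps, so the two [WinEventLog://ForwardedEvents] stanzas collapse into one, with conflicting keys resolved by app precedence — effectively only one input definition runs. To split one event log channel across two indexes, one pattern is a single input stanza plus index routing at parse time — a sketch, with all names as placeholders:

```
# inputs.conf -- one app, one stanza for the channel
[WinEventLog://ForwardedEvents]
index = index2
sourcetype = sourcetype2
evt_resolve_ad_obj = 1
renderXml = 0

# props.conf on the parsing tier (HF/indexer):
[sourcetype2]
TRANSFORMS-route_codes = route_codes_to_index1

# transforms.conf:
[route_codes_to_index1]
REGEX = EventCode=(4100|4103|4104)
DEST_KEY = _MetaData:Index
FORMAT = index1
```

All events arrive under one input; the transform re-routes the three event codes to index1 and everything else stays in index2.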
I have web logs and I want to detect automated scripts that crawl web pages or attempt any kind of dump. Of course, I could do this based on a large number of unique URLs, but I want to try to do it via abnormal request speed. The fields I have: src_ip, user, url, http_method, session_id, http_user_agent, response_code, url_length, bytes.
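With those fields, abnormal-rate detection can be sketched as a per-minute aggregation plus a deviation test per source IP — the index name and thresholds below are placeholders to tune:

```
index=web
| bin _time span=1m
| stats count AS requests, dc(url) AS distinct_urls BY src_ip, _time
| eventstats avg(requests) AS avg_req, stdev(requests) AS sd_req BY src_ip
| where requests > avg_req + 3 * sd_req OR requests > 300
```

The eventstats baseline flags IPs whose per-minute rate spikes relative to their own history; the absolute threshold catches steadily fast scrapers that have no quiet baseline. dc(url) per minute is a useful second signal for crawlers.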
We are running Sysmon on Amazon WorkSpaces and are trying to get the Sysmon (and other) logs into Splunk. We are currently trying to use a forwarder on the hosts. We run "splunk clone-prep-clear-config" before creating the bundles; however, the problem we are encountering is that when we create our bundles, AWS is doing something behind the scenes (likely a start-up and reboot) that is setting the GUIDs before the bundle is finalized. This means that every forwarder has the same GUID. Has anyone else worked with WorkSpaces logs, and how did you overcome this challenge? The other option I was thinking of trying was to see whether we can install the CloudWatch Agent and get the logs from CloudWatch instead of directly from the host.
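If the image boots once before capture, one workaround is to clear the identity again at first boot of each clone, before splunkd starts — the forwarder GUID lives in $SPLUNK_HOME/etc/instance.cfg and is regenerated on the next start after clone-prep. A sketch for a first-boot script (paths and ordering are assumptions about your bootstrap process):

```
# run once at first boot of the cloned WorkSpace, before normal startup
$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clone-prep-clear-config
$SPLUNK_HOME/bin/splunk start
```

This moves the de-duplication from bundle-creation time, which AWS interferes with, to clone-boot time, which you control.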
Hi All, we have a requirement for a project:

1. We need to extract all tier and node details and dump them into a CSV file.
2. We need the app and machine agent status (agent up, down, or % availability) details in the extraction.

Can anyone suggest a REST API to extract the required data? Regards, Mandar Kadam
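The AppDynamics controller REST API covers both requirements: inventory endpoints for tiers/nodes, and the agent availability metrics for up/down percentage. A sketch of the calls (controller host, application name, tier, and time window are placeholders):

```
# Tier and node inventory:
GET /controller/rest/applications?output=JSON
GET /controller/rest/applications/<app>/tiers?output=JSON
GET /controller/rest/applications/<app>/nodes?output=JSON

# Agent availability over a window (app agent; Agent|Machine|Availability for machine agents):
GET /controller/rest/applications/<app>/metric-data?metric-path=Application Infrastructure Performance|<tier>|Agent|App|Availability&time-range-type=BEFORE_NOW&duration-in-mins=60&output=JSON
```

The JSON output can then be flattened to CSV; availability as a percentage is the returned metric value relative to the number of minutes in the window.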
Hi team, I have a problem with the Splunk App for Infrastructure. When I launch the script on the command line of my host (Ubuntu 16), I always get this error:

Failed to install libcurl package. exiting ..

Script:

export SPLUNK_URL=X.X.X.X && export HEC_PORT=8088 && export RECEIVER_PORT=9997 && export INSTALL_LOCATION=/opt/ && export HEC_TOKEN=eb8d0b7d-1a8c-4ba2-8997-107acf610cf7 && export SAI_ENABLE_DOCKER= && export DIMENSIONS= METRIC_TYPES=cpu,uptime,df,disk,interface,load,memory,processmon METRIC_OPTS=cpu.by_cpu LOG_SOURCES=/etc/collectd/collectd.log%collectd,\$SPLUNK_HOME/var/log/splunk/*.log*%uf,/var/log/syslog%syslog,/var/log/daemon.log%syslog,/var/log/auth.log%syslog AUTHENTICATED_INSTALL=Yes && wget --no-check-certificate http://172.17.1.51:8000/static/app/splunk_app_infrastructure/unix_agent/unix-agent.tgz && tar -xzf unix-agent.tgz || gunzip -c unix-agent.tgz | tar xvf - && cd unix-agent && bash install_uf.sh && bash install_agent.sh && cd .. && rm -rf unix-agent && rm -rf unix-agent.tgz

Any help, please?
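The install_agent.sh step pulls in collectd's dependencies through the distribution package manager, so this error usually means apt could not find or fetch a libcurl package. A sketch of what to verify first on Ubuntu 16.04 (the exact package name is an assumption; check which one the script actually requests):

```
sudo apt-get update
apt-cache search libcurl
sudo apt-get install -y libcurl3
```

If the host has no direct internet access, pointing apt at an internal mirror (or pre-installing the package) before re-running the script is the usual fix.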
I would like to define the value of a variable, let's call it 'infile', based on the value of a token selected via radio button. Pseudocode:

if rbutton=yes then infile=inputfileA.csv
if rbutton=no then infile=inputfileB.csv

Any help appreciated.
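In Simple XML this is typically done with <change> conditions on the radio input, which set a second token the searches then use — a sketch matching the pseudocode (file names as given; labels are placeholders):

```xml
<input type="radio" token="rbutton">
  <label>Input file</label>
  <choice value="yes">Yes</choice>
  <choice value="no">No</choice>
  <change>
    <condition value="yes">
      <set token="infile">inputfileA.csv</set>
    </condition>
    <condition value="no">
      <set token="infile">inputfileB.csv</set>
    </condition>
  </change>
</input>
```

A search can then reference it as, e.g., | inputlookup $infile$.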
How can I run Python commands from my Mac to retrieve data from Splunk? I am going through the Splunk documentation: https://docs.splunk.com/Documentation/Splunk/6.2.1/Admin/AbouttheCLI

Settings > Server settings does not show General settings, so I cannot find the installation path.
How do we install Splunk on a Mac?
Which path should we use to run the CLI?
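The usual way to query Splunk from a Mac (or any machine) with Python is the REST API on the management port (8089) rather than the CLI, so no local Splunk installation is needed. A minimal sketch of building the search-job request; the hostname is a placeholder, and actually sending it also requires an Authorization header (session key or basic auth) over HTTPS:

```python
from urllib.parse import urlencode

def build_search_request(base_url, spl):
    """Build the URL and form body for creating a search job via
    Splunk's REST endpoint /services/search/jobs. The SPL goes in
    the 'search' form parameter, prefixed with the 'search' command."""
    url = base_url.rstrip("/") + "/services/search/jobs"
    body = urlencode({"search": "search " + spl, "output_mode": "json"})
    return url, body

# Hypothetical server; POST this with urllib.request or requests,
# authenticating with a session key or basic auth.
url, body = build_search_request("https://splunk.example.com:8089",
                                 "index=_internal | head 5")
print(url)
```

If installing a third-party library is an option, the splunk-sdk-python package wraps these endpoints for you.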
Splunk_TA_paloalto is not parsing the logs. inputs.conf:

[monitor:///data/splunkapp/syslog/MSSLCPRY01/paloalto_fw//.log]
sourcetype = pan:log
index = it
host_segment = 6
disabled = false

Is it mandatory to keep the index pan_log? The Palo Alto logs are sent to a syslog server/HF, and the TA is installed on the syslog server/HF. Can someone please help with what is going wrong here?
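Two things worth checking (a sketch; the globbed path is an assumption, since the path as pasted appears to have lost its wildcards to formatting). The TA's parsing keys off the pan:log sourcetype being assigned on the parsing tier, and its packaged search macros default to a dedicated index — another index works, but the macro then has to be updated to include it:

```
[monitor:///data/splunkapp/syslog/MSSLCPRY01/paloalto_fw/*/*.log]
sourcetype = pan:log
index = it
host_segment = 6
disabled = false
```

If events show up with sourcetype pan:log but are never retyped to pan:traffic/pan:threat etc., verify the TA is installed on the first full Splunk instance the data passes through (the HF here), since sourcetype rewriting happens at parse time.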
I'm trying to create a data model with child subsets and calling it in a search. However, the searches are scanning the whole index rather than the subset. How do I need to adjust the setup to stop it from searching everything?
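Unless the data model is accelerated, referencing it still runs the constraint search against the raw index at search time — the child's constraints filter the results, but not the scan. With acceleration enabled, tstats against the child dataset reads only the summaries. A sketch with placeholder model and dataset names:

```
| tstats summariesonly=true count FROM datamodel=My_Model WHERE nodename=Root_Dataset.Child_Subset BY _time span=1h
```

The unaccelerated form, | datamodel My_Model Child_Subset search, applies the same constraints but scans the index, which matches the behavior described above.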
Hi guys, I am new to Splunk. I have multiple events that look like this:

2020-02-07 07:21:20 action_time="2020-01-02 07:21:20.39", id_client="1234", ticket="1"
2020-02-07 07:21:20 action_time="2020-01-02 07:22:20.39", id_client="4567", ticket="2"
2020-02-07 07:21:20 action_time="2020-01-02 07:23:20.39", id_client="1234", ticket="2"
...

I would like to see transactions like this: in all events, find the first event where id_client="1234" and ticket="1". If it matches, find the next event with the same id_client but ticket="2". So, for the same client, find first ticket=1, followed by ticket=2. I tried:

...| transaction action startwith='1' endwith='2'

but it does not work. How can we do this in Splunk? Thank you in advance,
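For reference, the option names are startswith/endswith, they take field conditions rather than bare values, and the grouping field goes right after the command. A sketch of both the transaction form and a streamstats alternative (field names as in the events; the streamstats variant scales better on large data):

```
... | transaction id_client startswith=(ticket="1") endswith=(ticket="2")

... | sort 0 id_client _time
    | streamstats current=f last(ticket) AS prev_ticket BY id_client
    | where ticket="2" AND prev_ticket="1"
```

The first form builds one grouped event per 1-then-2 pair per client; the second flags each ticket=2 event whose immediately preceding event for the same client was ticket=1.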
In each JSON event that I put into Splunk, I have a field with the format:

"field": "1:2:3:4"

However, whenever I try to run a search using this field, it always says there are 0 results, even though I can see plenty of events with this field. One workaround I found was to use spath, after which I was able to search using it, but I'd rather not have to do that every time. Thanks in advance for any help.
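A likely reason (an assumption worth verifying in the Job Inspector): spath only working manually suggests the JSON fields are not being extracted automatically at search time, so field="1:2:3:4" never matches. Enabling JSON key-value extraction for the sourcetype makes the field searchable without piping through spath — a sketch, with the sourcetype name as a placeholder:

```
# props.conf on the search head
[my_json_sourcetype]
KV_MODE = json
```

After this, field="1:2:3:4" should behave like any other extracted field; colons in the value need no special quoting beyond the double quotes.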
I have a requirement to create a dashboard that gives an instance-level billing breakdown per service, e.g. under the EC2 service, instance A is incurring $xyz in cost. Please share some ideas on how we can achieve this using the AWS add-on.
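With the Splunk Add-on for AWS ingesting the detailed billing (or Cost and Usage) reports, a per-instance breakdown is essentially a stats over the billing source. A sketch — the sourcetype and field names are assumptions; check what your billing input actually extracts:

```
sourcetype=aws:billing ProductName="Amazon Elastic Compute Cloud"
| stats sum(UnBlendedCost) AS cost BY ResourceId
| sort - cost
```

Instance-level ResourceId values only appear when the billing report is configured in AWS with resource-level granularity enabled.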
When I checked the system health of Phantom, it showed only a small amount of free and used memory, with the rest of the memory cached. When I restarted Phantom, there was not much change in the cache memory usage. How can I free up cache memory?
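On Linux, a large "cached" figure is normally just the kernel page cache: reclaimable memory that is handed back automatically when applications need it, so it usually does not need freeing (which is also why restarting Phantom barely changes it). If you do want to drop it, e.g. for a measurement, a sketch:

```
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
```

Expect the cache to refill during normal operation; "free" memory staying low while "available" memory is high is healthy behavior.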
We have a not-at-all overloaded ES search head, with a separate dispatch volume with plenty of room, that gives us 500MB warnings. We also have a few weekly-scheduled searches which bring back about 100 rows of results with dozens of fields, with the default value of "2p" for dispatch.ttl, but the results are always gone after 2 days. We are on 7.3.latest. We have tried setting it to 2 weeks' worth of seconds and that did not work. What could be causing this? What logs should I look at, and what should I look for?
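One gotcha worth ruling out (an assumption, not a confirmed diagnosis): when a scheduled search triggers alert actions, the action's own ttl overrides dispatch.ttl for the artifact, so the saved-search setting appears to be ignored. A sketch of both settings in savedsearches.conf, with a placeholder stanza name:

```
[My Weekly Search]
# two weeks in seconds; "2p" means 2x the schedule period
dispatch.ttl = 1209600
# if the search fires an action, that action's ttl wins, e.g.:
action.email.ttl = 1209600
```

For the logs: splunkd.log on the search head, filtering for the dispatch reaper messages that name the deleted artifact directories, should show which ttl was actually applied.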