Hi, I'm implementing some automations from Splunk to other tools, and I would like to create a drilldown on one of my Splunk dashboards. I want it to perform a POST request to a custom API endpoint, sending the values of the clicked row as parameters. Has anyone ever implemented custom POST actions on a dashboard drilldown? Is it possible? It would be something similar to Workflow Actions, but I still haven't found a way to do this from a dashboard panel. Thanks!
Hi there! I'm playing a bit with the geostats sample from the Splunk Dashboards Example App. The sample produces a map with a breakdown by method (GET / POST) and, based on the coordinates, renders a pie chart at each location. The sample query looks pretty simple:

| inputlookup geomaps_data.csv
| iplocation device_ip
| geostats latfield=lat longfield=lon count(method) by method

I've tried adding the country name to the tooltip box, but I didn't have any luck. My goal is to keep the breakdown by method, so I can color-code the charts, and also show the country name on top of the methods. Any ideas on how I can accomplish that? TIA!
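Since geostats accepts only a single split-by field, one workaround (a sketch, assuming the Country field produced by iplocation is populated) is to merge the country and the method into one field before aggregating:

```spl
| inputlookup geomaps_data.csv
| iplocation device_ip
| eval country_method = Country . ": " . method
| geostats latfield=lat longfield=lon count by country_method
```

Each pie slice and tooltip entry then carries both the country name and the method, at the cost of a combined legend.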
Hello, I am looking for the best approach for installing and managing forwarders. We have everything on Linux. After reading some docs and posts, I think a deployment server is mostly used for managing forwarders.

Current approach: we usually have around 50 forwarders on different servers at a time. When we have to monitor a new machine, we use a bash script which: 1. copies the forwarder template to the remote machine, 2. updates inputs.conf in the forwarder with the host and monitoring path, 3. starts the forwarder. Whenever we have to change something, like adding a new file to monitor or changing a path, we update the forwarder manually. We don't keep the same forwarder up; once a campaign is over, we delete it and add new ones whenever needed. I have tried using a deployment server and I am able to update a static file on an already installed forwarder.

Query 1: If we update inputs.conf on a forwarder using the deployment server, do I need to manually restart the forwarder?

Query 2: Is it possible to change inputs.conf dynamically according to the machine where it's deployed, and also restart the forwarder after the change? Example: every time we deploy, we have to change the path (/net/hp707srv/hp707srv1/apps/QCST_MIC_v3.1.46_MASTER) and the host (HOST123):

[monitor:///net/hp707srv/hp707srv1/apps/QCST_MIC_v3.1.46_MASTER/logs/mxlivebook/.../service.log]
disabled = false
host = HOST123
index = live
sourcetype = mx_java_event

Query 3: Is it possible to install a forwarder using the deployment server?

Basically, I am looking to handle everything regarding forwarders from one central location. Thank you in advance for suggestions.
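On the restart question: a deployment server can restart the forwarder for you after pushing an updated app, so no manual restart is needed if you enable it per app. A minimal serverclass.conf sketch (the class and app names here are hypothetical):

```ini
# serverclass.conf on the deployment server
[serverClass:linux_forwarders]
whitelist.0 = *

[serverClass:linux_forwarders:app:my_inputs_app]
restartSplunkd = true
```

Fully dynamic per-host monitor paths are not something the deployment server templates natively; common workarounds are one app per host group, or setting `host = $decideOnStartup` in the forwarder's inputs.conf so the host field resolves on each machine at startup.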
I have a multiselect input and I'm able to get the values, but I want to get all the label values.

<fieldset submitButton="false" autoRun="true">
  <input type="multiselect" token="endpoint" searchWhenChanged="true">
    <label>Endpoints</label>
    <choice value="/project/api-aaa/v1.0">api-aaa</choice>
    <choice value="/project/api-bbb/v1.0">api-bbb</choice>
    <choice value="/project/api-ccc/v1.0">api-ccc</choice>
    <delimiter> OR </delimiter>
    <valuePrefix>"</valuePrefix>
    <valueSuffix>"</valueSuffix>
    <default>/project/api-aaa/v1.0,/project/api-bbb/v1.0,/project/api-ccc/v1.0</default>
    <change>
      <set token="dropToken_label">$label$</set>
    </change>
  </input>
</fieldset>

The `<set token="dropToken_label">$label$</set>` is only working partially: it gets the first label, which is `api-aaa`. How can I access all the labels selected?
I need to enable HTTPS on my page, so I asked the admin of the server and he gave me a .pem file. The .pem file appears to include the certificate and the encrypted private key. In the Splunk docs, there are references to two different files: .pem and .key. How do I obtain those files? Do I need to ask for something else? I'd appreciate any help. Thanks.
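Both pieces are usually already in that one file: a certificate block and a key block, each between their own `-----BEGIN ...-----`/`-----END ...-----` markers, so they can be split into two files with a text editor or openssl. Splunk Web then points at them in web.conf; a sketch with hypothetical paths:

```ini
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/mycert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/mykey.key
```

Since your key is encrypted, you will also need the passphrase from the admin (or an unencrypted copy of the key); otherwise Splunk Web cannot load it.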
I have a case where I need to determine whether a row has been repeated multiple times or not. It may have 4 common value columns, but the time might differ. Example below.

table: servername:data1:data2:data3:_time
server1:10:20:30:25th Sep 2020
server1:10:20:30:26th Sep 2020
server1:10:20:30:27th Sep 2020
server2:20:30:10:28th Sep 2020

I need output like below, with a new eval field called occurrence, which should have the value "multiple" or "single" based on occurrence at different times. Can anyone help me with this? Thanks.

servername:data1:data2:data3:_time:occurrence
server1:10:20:30:25th Sep 2020:multiple
server1:10:20:30:26th Sep 2020:multiple
server1:10:20:30:27th Sep 2020:multiple
server2:20:30:10:28th Sep 2020:single
server3:20:30:10:28th Sep 2020:single
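One way to flag this (a sketch using the field names from the example): count the rows sharing the same data values with eventstats, then derive occurrence from that count:

```spl
| eventstats count as repeats by servername, data1, data2, data3
| eval occurrence = if(repeats > 1, "multiple", "single")
| fields - repeats
```

Every row stays in the output, and rows whose servername/data1/data2/data3 combination appears at more than one time get "multiple".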
Hi fellow Splunkers, I'm facing a mysterious issue where the number of triggered alerts does not match the number of emails received. When I check python.log, I see the alert is giving me this error:

2020-09-25 18:49:01,765 +0000 ERROR sendemail:142 - Sending email. subject="Splunk Alert: to be deleted", results_link="http://aws-prod-east-splunk.megh.thingspace.com/app/search/@go?sid=scheduler__admin__search__RMD57f4b1593a5b5364b_at_1601059740_8497_BA4F469F-14CB-4CBF-A20F-40A798E7F698", recipients="[u'myemail@email.com']", server="top-smtp-proxy.ts-prod.cloud:587"
2020-09-25 18:49:01,765 +0000 ERROR sendemail:475 - (530, 'Authentication required', u'no-reply-top@verizon.com') while sending mail to: myemail@email.com

And I found this anomaly in my alert configuration. Note that the sendemail command from the search bar worked, and I did receive the email, so it's only giving me an error for alerts and scheduled searches. Is anyone else having this issue?
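The 530 "Authentication required" suggests the scheduled searches are reaching the SMTP proxy without credentials. If that's the case, the email alert action's authentication settings live in alert_actions.conf; a sketch (all values are placeholders for your environment):

```ini
# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
mailserver = top-smtp-proxy.ts-prod.cloud:587
use_tls = 1
auth_username = alerts@example.com
auth_password = <smtp password>
```

That could also explain why ad-hoc sendemail works: a command run from the search bar can carry different settings than the ones the scheduler picks up.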
These are the 2 Splunk queries that I have:

| tstats latest(_time) as latest where index=* earliest=-48h by host
| eval minutesago=round((now()-latest)/60,0)

| tstats latest(_time) as latest where index=* earliest=-10m by host
| eval minutesago=round((now()-latest)/60,0)

I need the Splunk query to do the following:

1. Identify the log feeds by the actual device products instead of just IPs.
2. A deeper review of the logs by sourcetypes and sources (not just index=*), given that some tools are sending multiple feeds that are stored in the same index files.
3. Track short-term and long-term outages instead of just the last 10 min and last 24 hrs.
4. Use charts to show the visual state of the devices' health check instead of tables.
5. Line charts to show log feed baselines vs spikes for the last 24hrs/7d/30d.
6. The ability to drill down into specific stats.
7. Asset pivoting from an IP/hostname to show full device info (there are multiple lookup tables that have the necessary data).

Please, I need your help with the Splunk query to do the above tasks.
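As a starting point for the sourcetype breakdown and the short/long-term outage tracking, the two tstats searches can be collapsed into one pass (a sketch; the thresholds and labels are arbitrary placeholders):

```spl
| tstats latest(_time) as latest where index=* earliest=-30d by index, sourcetype, host
| eval minutesago = round((now() - latest) / 60, 0)
| eval health = case(minutesago <= 10, "OK",
                     minutesago <= 1440, "short outage",
                     true(), "long outage")
```

A `stats count by health` on top of that feeds a single-value or pie panel, and the by-clause fields give you natural drilldown targets.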
Is there a way we can automatically create alerts in Splunk? I am able to create alerts manually, but I'm wondering how to create alerts using automation.
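Two common routes are the REST API (a POST to the saved/searches endpoint) and deploying alerts as configuration files. A hypothetical savedsearches.conf stanza, shipped inside an app by whatever automation you already use:

```ini
# savedsearches.conf in an app's default/ or local/ directory
[Auto-created error alert]
search = index=main log_level=ERROR
dispatch.earliest_time = -15m
enableSched = 1
cron_schedule = */15 * * * *
counttype = number of events
relation = greater than
quantity = 0
actions = email
action.email.to = ops@example.com
```

The stanza name, search, and recipient here are made up; the keys are the standard scheduled-alert settings.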
I have a comma-delimited multivalue field that contains text and a digit in each value pair, and I'm trying to find the maximum digit and return the text and digit to the results. My multivalue field contains the following values:

Linked to Historical Cyber Exploit,1
Historically Linked to Malware,1
Historically Linked to Penetration Testing Tools,1
Exploited in the Wild by Recently Active Malware,5

If I just do a simple max(fieldName), it returns the following:

Linked to Historical Cyber Exploit,1

which seems to be based on the alphabetic interpretation of 'max'. What I want returned is:

Exploited in the Wild by Recently Active Malware,5

I think I need to do an mvexpand followed by a rex of the resulting field, but I'm at a loss as to how to return anything other than a "5" instead of the whole line. Thanks in advance for any help!
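A sketch along the mvexpand/rex line you describe ("fieldName" stands in for your real field name): expand the pairs, pull the trailing digit into its own field, and keep the row with the highest score, so the full "text,digit" string survives:

```spl
| mvexpand fieldName
| rex field=fieldName ",(?<score>\d+)$"
| sort 0 - num(score)
| head 1
```

If you need this per original event rather than across all results, swap the sort/head for an `eventstats max(score) as maxscore by <some per-event id>` followed by `where score = maxscore`.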
So I'm working off a query based on the Splunk App for *nix. It uses interfaces.sh.

query:
index=os sourcetype=interfaces host=server Name=eth*
| head 8
| eval status = if(RXbytes == "0", "UP", "DOWN")
| stats values(RXbytes) by Name

Basically I want to show the 8 interfaces, with the # of RX bytes in each single value, color coded for UP/DOWN, which I set via the dashboard options:

0-1 - Red
1-500 - Yellow
500-Max - Green

I'm also starting to wonder if I really need the eval statement in there. I would like it to look like

ETH1  ETH2  ETH3  ETH4  ETH5  ETH6  ETH7  ETH8

with the panels side by side in one row rather than stacked. Is that possible? Sorry, the system is not connected, so it's kind of a pain to get screenshots.
Hello everyone, I have the following pattern of logs and I'm trying to use rex to extract the values, but I'm having problems because of the + in some events. Can you help me?

| rex field=_raw "DNIS:(?<ANI>\d+)"

2020-09-25 11:50:52.946-03:00 DNIS:+558730246133
2020-09-25 11:51:33.218-03:00 DNIS:994699160
2020-09-25 11:52:52.946-03:00 DNIS:994376160
2020-09-25 11:53:52.946-03:00 DNIS:+994699160
2020-09-25 11:54:52.946-03:00 DNIS:+558730246133
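Since \d+ matches only digits, the pattern fails whenever a + immediately follows "DNIS:". One fix (a sketch) is to allow an optional leading plus outside the capture group:

```spl
| rex field=_raw "DNIS:\+?(?<ANI>\d+)"
```

If you want ANI to keep the + when it is present, move it inside the group instead: "DNIS:(?<ANI>\+?\d+)".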
Hello, we have a Splunk Enterprise server and have installed the Splunk App for AWS and the Splunk Add-on for AWS. We configured the AWS account details in the Splunk configuration with the required IAM roles and permissions, but we are not able to pull the CloudWatch Logs into the on-premises Splunk server. Please refer to the snaps below for the same. I tried the same setup by installing Splunk on an AWS EC2 instance and assigning the role to the EC2 instance, and there it works fine. Can you please help with this? I have searched the internet for this but have not found a concrete solution. I will appreciate your help. Thank you, Suraj Shinde
I'm trying to look at all of our users on a personal VPN who have accessed O365 (SharePoint, OneDrive, etc.) from their personal systems. For starters, we're trying to combine the VPN + Azure + O365 activity logs. Any advice on how to do that with the following information? My first attempt at combining returned correct results for the day, but the results changed within 30 minutes for an unknown reason. And yes, the cip referenced in the VPN logs is the external IP, NOT the VPN IP address.

index=network sourcetype=syslog_vpn eventtype=vpnuser device_type="Personal Device"
| rename vpn_uid as user
| lookup uid2userinfo user OUTPUT FULL_NAME
| rex "(?i) cip:(?P<cip>[^ ]+)"
| lookup dnslookup clientip AS cip OUTPUT clienthost AS chost
| table _time type user FULL_NAME device_type cip chost clientos country employee_type tunnel_mode

Here, cip is the "ExtIP" field I want to match on (but once you start adding subsearches, it breaks the rex for some reason). I care about the fields in the table and would like to add the Workload and ObjectId fields to it from the o365_activity sourcetype.

index=azuread deviceDetail.trustType=null status.errorCode=0

Here, ipAddress is the "ExtIP" field I want to match on.

eventtype=o365_activity

Here, ClientIP is the "ExtIP" field I want to match on. I care about the Workload and ObjectId fields and want them to be part of my results.
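One pattern that avoids subsearches entirely (a sketch; it assumes the three searches can run as a single disjunction over the same time range) is to normalize the per-source external-IP fields into one field and aggregate on it:

```spl
(index=network sourcetype=syslog_vpn eventtype=vpnuser device_type="Personal Device")
OR (index=azuread deviceDetail.trustType=null status.errorCode=0)
OR (eventtype=o365_activity)
| rex "(?i) cip:(?P<cip>[^ ]+)"
| eval ExtIP = coalesce(cip, ipAddress, ClientIP)
| stats values(user) as user, values(device_type) as device_type,
        values(Workload) as Workload, values(ObjectId) as ObjectId by ExtIP
```

Because there is no subsearch, the rex for cip keeps working; coalesce simply picks whichever of cip, ipAddress, or ClientIP each event actually has.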
Hello, I'm trying to run an audit search for high-priority Linux servers. It should cover the following: sudo login, failed login, login/logoff, and account change and deletion. I was able to combine two searches with the "OR" operator:

index="ssh_login_index" sourcetype="linux_secure" (process=sshd session opened OR closed) host="linux server"

but I still can't combine the rest of the searches with the search above. Thanks in advance!
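One way to fold the remaining cases into that search (a sketch; the quoted strings are typical linux_secure message fragments and may need adjusting to your events):

```spl
index="ssh_login_index" sourcetype="linux_secure" host="linux server"
    ("session opened" OR "session closed" OR "Failed password"
     OR "sudo:" OR useradd OR userdel OR usermod)
```

Quoting the multi-word fragments matters; unquoted, each word becomes a separate search term.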
Hi, did anyone succeed in configuring Splunk with HCP? The CacheManager is able to upload and download buckets, and that part works perfectly. But every time Splunk tries to freeze a bucket, the transaction fails with error 501 (Not Implemented). The same happens if I try to manually remove a bucket/file from the Splunk CLI.

WARN S3Client - command=remove transactionId=0x7fb4332dXXXX rTxnId=0x7fb427bfXXXX status=completed success=N uri=https://splunk.XXXXXXXXXXX.com.pl/buckets/test/db/d7/cd/66~7FD81528-13C4-4063-A45C-26DE9D698D42/receipt.json statusCode=501 statusDescription="Not Implemented" payload="<?xml version='1.0' encoding='UTF-8'?>\n<Error>\n <Code>NotImplemented</Code>\n <Message>Only the current version of an object can be deleted.</Message>\n <RequestId>160104XXXXX31</RequestId>\n <HostId>aGNwLXIuYmXXXXXXXXXXXXXMTA4</HostId>\n</Error>\n\n"

We tried disabling and re-enabling versioning on HCP, but it does not help. Has anyone had such an issue? What else needs to be configured on HCP or Splunk?
Hi, I get data from a CSV file, and one of the fields imported is a JSON-like string called "Tags" which looks like this:

Tags = {"tag1": "toto" "tag2": "tata" "tag4": "titi"}  --> example for one line
Tags = {"tag3": "toto" "tag4": "tata"}  --> example for another line

The delimiter between key and value is <colon>+<space>; the delimiter between two key+value pairs is <space>.

I tried

| spath input=Tags

but when I do

| table tag1, tag2, tag3, tag4

I get a value only for tag1. I tried to find a way to solve it by looking at other topics, but I did not succeed. I understand that my string is not correctly formatted like "real" JSON, but I can't find the command to convert my initial "Tags" field into correct JSON so I can apply the spath command. Does anybody have an idea how to do it? Thanks in advance.
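Since the only thing missing is the comma between pairs, one sketch is to insert it with replace() and then run spath (this assumes the tag values themselves never contain a quote-space-quote sequence):

```spl
| eval Tags = replace(Tags, "\" \"", "\", \"")
| spath input=Tags
```

The quote-space-quote pattern only occurs between a value's closing quote and the next key's opening quote (keys and values are separated by colon-space), so {"tag1": "toto" "tag2": "tata"} becomes valid JSON: {"tag1": "toto", "tag2": "tata"}.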
Hi Team, I designed a Splunk dashboard and added some formatting, like coloring some rows and highlighting text, but when I export the dashboard to PDF, the formatting is not reflected in the PDF. Is there any way to achieve this? Thanks and regards, Digvijay Gaikwad
Hello, I'm trying to determine the error rate for each individual ServiceName. I'm having trouble performing a group-by followed by the error-rate calculation. This is how I calculate the count for the individual ServiceNames:

index=myindex ServiceName="foo.bar.*"
| stats count by ServiceName

ServiceName      Count
foo.bar.apple    10
foo.bar.banana   20

The following query determines the failure rate, i.e. status NOT OK, for the entire service, in my case the apple and banana services combined:

index=myindex ServiceName="foo.bar*"
| eventstats count(HTTPStatus) as Total
| where HTTPStatus!=200
| stats count(HTTPStatus) as Error, values(Total) as Total
| eval fail_rate = Error*100/Total
| fields fail_rate

fail_rate
0.0012

I want something like below: individual error rates for the services foo.bar.apple and foo.bar.banana.

ServiceName      Count   fail_rate
foo.bar.apple    10      0.0010
foo.bar.banana   20      0.0014

This is the query I'm using to try to achieve the above table. I'm aware that we would need to store the count for each service name and then run the query separately to determine the fail count; we cannot do this in parallel.

index=myindex ServiceName="foo.bar*"
| eventstats count(HTTPStatus) as Total by ServiceName
| where HTTPStatus!=200
| stats count(HTTPStatus) as Error, values(Total) as Total
| eval fail_rate = Error*100/Total
| fields fail_rate

I appreciate your support and time! Vamshi.
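For what it's worth, the per-service count and error rate can usually be computed in a single pass with an eval inside stats (a sketch, assuming HTTPStatus is present on every event):

```spl
index=myindex ServiceName="foo.bar.*"
| stats count as Count, count(eval(HTTPStatus != 200)) as Error by ServiceName
| eval fail_rate = Error * 100 / Count
| table ServiceName, Count, fail_rate
```

The `by ServiceName` keeps the totals and error counts aligned per service, so no second query or subsearch is needed.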
Log files are a set of records that Linux maintains for administrators to keep track of important events. Linux provides a centralized repository of log files, located under the /var/log directory. The log files generated in a Linux environment can typically be classified into four categories:

Application Logs
Event Logs
Service Logs
System Logs

Our requirement is to monitor those Linux logs, either directly via a machine/server agent or, if one exists, via an extension for the same requirement. Please suggest how to monitor Linux logs/events.
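In Splunk terms, the usual route is a universal forwarder on the machine with monitor stanzas for the logs you care about; a minimal inputs.conf sketch (the index name is a placeholder):

```ini
# inputs.conf on the universal forwarder
[monitor:///var/log/messages]
disabled = false
index = os
sourcetype = syslog

[monitor:///var/log/secure]
disabled = false
index = os
sourcetype = linux_secure
```

A stanza like [monitor:///var/log] would pick up the whole directory recursively, at the cost of noisier sourcetyping.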