All Topics

Hello Splunk community,

I am having some trouble filling my null values with conditional field values.

I have events that go through steps (1-7) and each step is one line, i.e. one event. However, if there is an Error line there is no step number. In that case I would like to fill the N/A value with the same step value as the previous line/event.

Here is an example:

customer_number  status  step
1234             OK      5
1234             OK      4
1234             KO      N/A   <- here it should be step number 3
1234             OK      3
1234             OK      2
1234             OK      1

I would like to fill the N/A value with the step number of the previous line, so step 3. I tried it with eventstats and streamstats by getting the last step for OK, but the KO line is not necessarily the last line for the customer. I also tried it with filldown, but it always takes the line above the KO and not the one prior.

Here is my latest search query that I tried:

| eventstats latest(step) as laststep by customer_number
| eventstats latest(status) as laststatus by customer_number
| eval step=if(status="KO" AND laststatus="KO" AND step="", laststep, step)
| filldown step

This works when the KO is in the last step. The filldown command would be useful if it were able to take conditions. The ideal solution would be a reverse filldown command that would fill the N/A with the values of the events and their fields prior to the KO.

Please help! Thank you in advance!
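Since results are listed newest-first, one hedged sketch of a "reverse filldown" (field names assumed from the example above) is to flip the result order so the chronologically previous event comes before the KO line, filldown, then flip back:

```
<your base search>
| eval step=if(step="N/A", null(), step)
| reverse
| filldown step
| reverse
```

If the KO events simply have no step field at all (rather than a literal "N/A" string), the eval line can be dropped.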
We have VPC flow and firewall logs coming into Splunk from our Kubernetes deployments in GCP. I want to be able to map our containers onto this information so I can track individual container network activity. The problem is that the IP addresses are frequently recycled between different containers.

I've created a search which maps out which containers had which IP addresses at which times:

Container Name / Start Time / End Time / IP address

I can use this information to search for the flow/firewall log events for an individual container:

index=networklogs earliest=startTime latest=endTime "IP address"

What I want to do is map the container names onto the networking data so that I can track networking events via the unique container names rather than the IP addresses, which are continually recycled between containers as they are created and destroyed - for example, to add the container names into the events in the Network_Traffic.All_Traffic data model. The mapping also needs to be persistent so we can look back over historical data.

One idea is to add the container names as a key-value pair lookup at ingest, but any other ideas on the best way to go about this would be great. Thanks
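One option that avoids ingest-time changes is a time-bounded (temporal) lookup, so an IP only matches a container during that container's lifetime. A sketch, assuming the mapping search above is exported to a CSV with an epoch start_time column (stanza, file, and field names are assumptions):

```
# transforms.conf (search head)
[container_ip_map]
filename = container_ip_map.csv
time_field = start_time
time_format = %s
max_offset_secs = 86400
```

At search time something like `| lookup container_ip_map ip AS src OUTPUT container_name` would then match against the entry whose window covers the event's _time; the CSV also doubles as the persistent historical record.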
Hi All,

I would like to search for data of a specific 7-character length from 2 tables. Within these 2 tables the field values vary in length from 1 to 20 characters, but I need only those which are exactly 7 digits/characters long. After I have filtered for these 7-character values from these 2 tables, I would like to put them into a common name/ID which I can use for my lookup.

The lookup search I am using is:

index=myindex
| lookup my_lookup field1 OUTPUTNEW mylookup_name
| eval field1=IF(ISNULL(mylookup_name),field1,field2)
| rename field1 as "NAME"
| chart count by "NAME"
| sort -count

Do you have any idea how I can do this and use it with the above lookup?
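A hedged sketch of the length filter, assuming the value to test is in field1 (adjust names to your data) - len() keeps exactly-7-character values before the lookup runs:

```
index=myindex
| where len(field1)=7
| lookup my_lookup field1 OUTPUTNEW mylookup_name
| eval field1=if(isnull(mylookup_name),field1,field2)
| rename field1 as "NAME"
| chart count by "NAME"
| sort -count
```

If the values must be strictly digits, `| regex field1="^\d{7}$"` in place of the where clause restricts to exactly seven digits.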
Hello,

The search below returns results but the where condition doesn't work:

`wire`
| eval USERNAME=upper(USERNAME)
| lookup aps.csv NAME as AP_NAME OUTPUT Building
| lookup lookup_cmdb_fo_all HOSTNAME as USERNAME output BUILDING_CODE
| eval Building=upper(Building)
| stats last(Building) as "GB", last(BUILDING_CODE) as "SB" by USERNAME
| where NOT ('GB' = 'SB')

I have tried many things:

| where NOT ('GB' = 'SB')
| where NOT like ('GB','SB')
| where NOT ("GB"="SB")

What is the problem, please?
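A guess without seeing the data: if either lookup fails, GB or SB is null, and in where a comparison involving a missing field evaluates to null, which is treated as false even after NOT - so those rows are silently dropped. Also, double quotes in where compare against the literal string "SB", not the field. A sketch that fills nulls first so the comparison always has two values:

```
| stats last(Building) as GB, last(BUILDING_CODE) as SB by USERNAME
| fillnull value="-" GB SB
| where GB!=SB
```

Using unquoted field names (no special characters) also sidesteps the single-quote vs. double-quote distinction.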
Hello,

I wonder if you have any suggestion as to why, over time, the results of a stats count may vary for a past time frame. I have a scheduled report doing this search each week:

index=x1 OR index=x2 OR index=x3 OR index=x4
| eval tempo = strftime(_time,"%Y-%m")
| stats count by tempo,index
| sort tempo, index

2 of the 4 indexes are closed (no new events for at least a year) => older events. Recent events go to the other 2 indexes.

For past periods in the report, I should get the same result each week (e.g. for 2016-04, index=x1, result_stats_count=125469522). Except I don't. Beginning 3 weeks ago, the results changed for some periods (2016-04 for example) even though there is no new data for that period (I checked: no new events have been indexed in those indexes this year). In some cases the number increases, in others it decreases, or both over the 3 weeks. This data has not yet reached the retention limit.

The Splunk platform is 2 SHCs, 1 multisite indexer cluster and a few forwarders. Operations done in the last 3 weeks: a new cluster bundle push with rolling restart, and some SHC rolling restarts.

I didn't find anything helpful in _internal to explain this behaviour. Do you have any ideas? Pointers?

Thanks a lot, Ema
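One way to narrow this down (a sketch - it assumes you can run dbinspect against the cluster) is to snapshot bucket-level counts for an affected month each week and diff them; if buckets moved between peers or changed state during the rolling restarts, the per-bucket numbers should show it:

```
| dbinspect index=x1
| where strftime(startEpoch,"%Y-%m")="2016-04" OR strftime(endEpoch,"%Y-%m")="2016-04"
| stats sum(eventCount) as events, count as buckets by splunk_server, state
```

Comparing this output week over week separates "events actually changed" from "the same events are being counted on different/duplicated buckets", which rolling restarts in a multisite cluster can cause.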
I'm writing an alert action custom app. However, the embedded python3 will not let me use a required library that has a compiled C library in it (a .so file); /opt/splunk/bin/python3 for some reason won't link it. Is there a way to tell Splunk's Python (I'm on Splunk 8.1.0) to load the compiled C libraries? That would be the best way.

I noticed that the system /usr/bin/python3 works just fine. How do I tell Splunk to use that in my alert action script conf file, or in the script itself? I have `splunk.version=python3` in the config file and I've tried both '#!/usr/bin/python3' and '#!/usr/bin/env python3' at the top of my script.

I'm thinking that I may need to write a shell script shim that simply calls my script the right way. But I'd like to avoid that if possible (and I'm not even sure it would work in the newer versions of Splunk). Thanks!
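If the shim route turns out to be necessary, a minimal sketch (the paths and file name are assumptions; also note the alert_actions.conf key Splunk documents is python.version, not splunk.version, which may be worth double-checking first):

```
#!/bin/sh
# shim.sh - invoke the system Python instead of Splunk's bundled one.
# Splunk passes the alert payload on stdin and arguments on the command
# line, so both are forwarded to the real script untouched.
exec /usr/bin/python3 "$(dirname "$0")/my_alert_action.py" "$@"
```

The trade-off is that the system Python won't have Splunk's bundled SDK modules on its path, so the real script would need its dependencies installed system-wide or vendored into the app.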
Hello all! I'm so lost trying to get a full process tree to visualize in the dendrogram app https://splunkbase.splunk.com/app/5153/ ; I hope somebody can help me.

Data example:

parent  sourceProcess  child  destinationProcess
906     PanGpHip.exe   942    cmd.exe
906     PanGpHip.exe   934    cmd.exe
906     PanGpHip.exe   938    cmd.exe
906     PanGpHip.exe   930    cmd.exe
906     PanGpHip.exe   926    cmd.exe
906     PanGpHip.exe   921    cmd.exe
906     PanGpHip.exe   913    cmd.exe
246     PanGPS.exe     906    PanGpHip.exe
16      svchost.exe    242    RuntimeBroker.exe
6       services.exe   243    sppsvc.exe

The data needs to be shown as in the following capture to work.
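The dendrogram visualization expects one delimited path per row (e.g. PanGPS.exe/PanGpHip.exe/cmd.exe). A hedged sketch that resolves one extra ancestor level with a self-join (the index name is an assumption; deeper trees need the join repeated once per level, or a different approach entirely):

```
index=procs
| fields parent sourceProcess child destinationProcess
| join type=left parent
    [ search index=procs
      | rename child as parent, sourceProcess as grandParent
      | fields parent grandParent ]
| eval path=coalesce(grandParent."/","").sourceProcess."/".destinationProcess
| table path
```

For the PanGpHip.exe rows this yields PanGPS.exe/PanGpHip.exe/cmd.exe, while rows with no known grandparent fall back to a two-level path.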
Hey,

I have just downloaded and tried to install splunkforwarder but it gives me an error.

I downloaded the package like this:

wget -O splunkforwarder-8.1.0-f57c09e87251-linux-2.6-amd64.deb 'https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=8.1.0&product=universalforwarder&filename=splunkforwarder-8.1.0-f57c09e87251-linux-2.6-amd64.deb&wget=true'
dpkg -i splunkforwarder-8.1.0-f57c09e87251-linux-2.6-amd64.deb

(Reading database ... 118207 files and directories currently installed.)
Preparing to unpack splunkforwarder-8.1.0-f57c09e87251-linux-2.6-amd64.deb ...
This looks like an upgrade of an existing Splunk Server. Attempting to stop the installed Splunk Server...
splunkd is not running.
Unpacking splunkforwarder (8.1.0) over (8.1.0) ...
Setting up splunkforwarder (8.1.0) ...
cp: cannot stat '/opt/splunkforwarder/etc/regid.2001-12.com.splunk-UniversalForwarder.swidtag': No such file or directory
complete

Machine info:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

What should I do?
Hi,

I had a good base search for calculating and alerting when an upload/download happens, but when I tried to tidy it up to convert bytes to KB and show the percentage as "10%" instead of just "10", somewhere along the way my search broke.

When I try to show bytes as KB using this:

| eval total_KB_bytes=round(total_bytes/1024,0)."KB"
| eval KB_bytes_in=round(bytes_in/1024,0)."KB"
| eval KB_bytes_out=round(bytes_out/1024,0)."KB"

my Classification and Alert break. Any help would be greatly appreciated!

The original search is:

index=zscaler http_method IN ("POST", "PUT")
| rename bytes as "total_bytes"
| table _time index user src_user_email dest app appclass category http_method filetype total_bytes bytes_in bytes_out
| eval user_bytes_perc_download = round((bytes_in/total_bytes)*100,2)
| eval user_bytes_perc_upload = round((bytes_out/total_bytes)*100,2)
| eval Classification=case(user_bytes_perc_download > 70,"download", user_bytes_perc_upload > 70,"upload", user_bytes_perc_download <70 AND user_bytes_perc_upload <70, "none")
| eval Alert=if((Classification="download" OR Classification="upload") AND total_bytes > 20000, "YES", "NO")
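Appending "KB" or "%" turns a numeric value into a string, and if those string fields feed (or replace the inputs of) the case()/if() comparisons, the numeric logic breaks. One hedged fix: keep the fields numeric for the whole classification, then build display-only fields at the very end (the _display names are my own):

```
index=zscaler http_method IN ("POST", "PUT")
| rename bytes as total_bytes
| eval user_bytes_perc_download = round((bytes_in/total_bytes)*100,2)
| eval user_bytes_perc_upload   = round((bytes_out/total_bytes)*100,2)
| eval Classification=case(user_bytes_perc_download > 70,"download", user_bytes_perc_upload > 70,"upload", true(),"none")
| eval Alert=if(Classification!="none" AND total_bytes > 20000, "YES", "NO")
| eval total_KB_display      = round(total_bytes/1024,0)."KB"
| eval download_pct_display  = user_bytes_perc_download."%"
| eval upload_pct_display    = user_bytes_perc_upload."%"
```

Ordering is the whole trick: all comparisons happen while the fields are still numbers, and the "KB"/"%" suffixes exist only in the display copies.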
Hi, I'm fairly new to Splunk and want to create many dashboards. It feels wrong to create dashboard after dashboard from scratch, so I'm wondering if there is a way to create a template which users can take as a basis to build their own dashboards when they need to. Is it possible? If the answer is yes, then how? Are there any examples of that? Maybe I'm thinking about it wrong...

Thanks in advance!
Br RR
Hi All,

I am installing the Java agent as per the link below. All the configuration is done and the application has been restarted, but the Java agent is not starting. I checked the logs and nothing has been generated. It is a Windows machine running a Java application.

https://docs.appdynamics.com/display/PRO45/Apache+Tomcat+Startup+Settings

Kindly advise.

Regards,
Bhunesh
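For Tomcat on Windows, the usual first check (a sketch - the install path is an assumption) is whether the -javaagent flag actually reaches the JVM, e.g. in CATALINA_HOME\bin\setenv.bat:

```
rem setenv.bat - agent path below is an assumed install location
set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:C:\appdynamics\javaagent\javaagent.jar
```

If nothing at all appears under the agent's own logs directory, the flag most likely never reached the JVM - in particular, Tomcat running as a Windows service ignores setenv.bat, and the -javaagent option has to be added through the service's Java configuration instead.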
Hello,

I have data inputs configured with HEC coming in to index=A and source=http:sourcename1. I now have logs of a similar type which I want to come into the same index (i.e. index=A) and with the same source (i.e. http:sourcename1), but which would require me to set up a new HEC connection with a new token. Please let me know if this is possible. Thanks.
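As far as I know this is possible - index and source are per-token settings, so two tokens can share the same values. A sketch of the second token's stanza (stanza name and token value are placeholders):

```
# inputs.conf on the HEC receiver
[http://sourcename1_second_token]
token = <new-token-guid>
index = A
source = http:sourcename1
```

The sender can also override the source per event in the HEC JSON payload (a top-level "source": "http:sourcename1" key alongside "event"), provided the token's settings allow it.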
Hello fellow Splunk community members

I've finally got a workable solution for running Snort on my home router, outputting JSON to send across to my Raspberry Pi-homed UF. It works a treat, but for one thing. If you're curious, it's dd-wrt running Entware Snort, processing u2fast logs into JSON with python3-idstools.

The Snort JSON output log on the router looks like this:

{"msg": "ET POLICY iTunes User Agent", "classification": "Potential Corporate Privacy Violation", "sensor-id": 0, "event-id": 354, "event-second": 1605757495, "event-microsecond": 660579, "signature-id": 2002878, "generator-id": 1, "signature-revision": 6, "classification-id": 33, "priority": 1, "sport-itype": 57226, "dport-icode": 80, "protocol": 6, "impact-flag": 0, "impact": 0, "blocked": 0, "mpls-label": null, "vlan-id": null, "pad2": null, "source-ip": "192.168.1.25", "destination-ip": "17.253.35.206"}

It's JSON-lint validated output too, so that's a bonus. But then syslog-ng gets its hands on it. I've delved deep into the Balabit syslog-ng administration manual, and despite adding all of the relevant syslog-ng.conf attributes to prevent syslog-ng adding its own header, syslog-ng can't seem to help itself!

On the router sending the logs to the UF, the syslog-ng.conf looks like this:

** CHOPPED FOR BREVITY **
source s_snort_json { file("/tmp/alerts.json" follow-freq(1) flags(no-parse)); };
destination d_tcp_splunk_forwarder { network("192.168.1.92" template("${MESSAGE}\n") port(1514)); };
log { source(s_snort_json); destination(d_tcp_splunk_forwarder); };

I've tried using the built-in JSON parser with syslog-ng, but it doesn't really work and simply adds to the problem; I don't really want syslog-ng to fiddle with the JSON at all. I just want to send it to the UF as it is. On the receiving UF system, the log is received using syslog-ng again.

The syslog-ng.conf on that box looks like this:

** CHOPPED FOR BREVITY **
source s_network_tcp { network( ip("0.0.0.0") transport("tcp") port(1514) flags(no-parse) ); };
destination d_snort { file("/var/log/snort.json"); };
log { source(s_network_tcp); destination(d_snort); };

Note the flags(no-parse) and template (which both appear to have no effect) - syslog-ng still adds its own data! The output now (inexplicably) looks like this in /var/log/snort.json:

Nov 19 03:44:56 192.168.1.1 {"type": "event", "event": {"msg": "ET POLICY iTunes User Agent", "classification": "Potential Corporate Privacy Violation", "sensor-id": 0, "event-id": 354, "event-second": 1605757495, "event-microsecond": 660579, "signature-id": 2002878, "generator-id": 1, "signature-revision": 6, "classification-id": 33, "priority": 1, "sport-itype": 57226, "dport-icode": 80, "protocol": 6, "impact-flag": 0, "impact": 0, "blocked": 0, "mpls-label": null, "vlan-id": null, "pad2": null, "source-ip": "192.168.1.25", "destination-ip": "17.253.35.206"}}

Syslog-ng seems to be like a stubborn child. No matter how carefully you tell it not to do something, it still does exactly what it wants! Props.conf to the rescue here, right? On the UF, my props.conf looks like this:

[sourcetype=json]
KV_MODE = json
INDEXED_EXTRACTIONS = json
TIME_PREFIX= \"event-second\"\:
# I've tried SEDCMD-strip_prefix = s/^[^{]+// here too
SEDCMD-strip_prefix = s/^[^{]+//g
NO_BINARY_CHECK = true
disabled = false
pulldown_type = true

In Splunk, however, the syslog-ng-added header remains. I don't have a reliable way of testing the SEDCMD outputs as the Splunk version seems not to be a GNU-syntax-compatible sed implementation. Does anyone have any suggestions, either for the syslog-ng pipeline conf(s) or for where I'm going wrong in the props.conf? (I can't use rsyslog on the router BTW - opkg has no package available.)

Many thanks and all the best
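Two things in that props.conf stand out (hedged, since I can't test your exact pipeline). First, the stanza name [sourcetype=json] isn't valid - a sourcetype stanza is just the bare sourcetype name, e.g. [snort:json], matching whatever the inputs stanza assigns. Second, SEDCMD runs in the indexer's parsing pipeline, not on a universal forwarder, so placing it on the UF has no effect. A sketch, assuming a custom sourcetype snort:json and dropping INDEXED_EXTRACTIONS so the indexer parses the raw line:

```
# props.conf on the indexer (or a heavy forwarder), not the UF
[snort:json]
SEDCMD-strip_prefix = s/^[^{]+//
TIME_PREFIX = \"event-second\":\s*
KV_MODE = json
```

With the syslog header stripped at index time, search-time KV_MODE=json can parse the events cleanly. Keeping INDEXED_EXTRACTIONS=json on the UF instead would require the header to be gone before the UF ever reads the file, which is exactly what syslog-ng is refusing to do here.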
Hi all, hope you can help address a pretty serious concern I'm having.

I have several scheduled alerts configured in Splunk to run hourly; each run queries the past hour of events. I've configured them not to throttle and to trigger "Once", and each sends an email when triggered.

I recently ran the query manually in Splunk search and found two problems.

1) There were several different results across different hours: one result from 14:00-15:00 and another from 17:00-18:00, but I received only one massively delayed email. And just look at this trigger time - to clarify, this is for the 14:00 alert, not the 17:00 one, and it doesn't even include the 17:00 results!

Trigger Time: 17:25:48 HKT on November 16, 2020.

2) I checked the results for an entirely different day and found that I had not received an email about it at all. When I run my query in Splunk search hour by hour, I can definitely see the results, so it's not a problem with my query.

MinTime: 11/18/2020 10:39:32   LatestTime: 11/18/2020 10:42:25

I know that one possible reason my queries are so delayed is that I have a large number of scheduled searches running (100+?) and that affects the queueing, but is it really this bad? How can no emails be sent at all?

I'm really at a loss as to how to check this further. I've checked my mailbox settings and confirmed that I haven't blocked or junked any of the emails sent by Splunk. I don't know what my next step should be. Can someone please help? Thank you
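One way to confirm whether runs are being skipped or deferred by the scheduler (a sketch - substitute your alert's saved-search name) is to check the scheduler logs in _internal:

```
index=_internal sourcetype=scheduler savedsearch_name="<your alert name>"
| stats count by status, reason
| sort -count
```

A high count of skipped or deferred statuses (with reasons like hitting the concurrent-search limit) would line up with the delayed trigger times and the missing emails, and points at scheduler load rather than the alert's own query.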
Hi everybody, I've been looking for a Splunk Cloud "best practices" or "dos and don'ts" list, but can't find anything similar. Can anybody share such a document/link, or perhaps write some useful recommendations based on their own experience?
Hello all, I have a requirement:

I'm pushing CSV file data (not pushed regularly) to a Splunk index using a Splunk forwarder. Using that data I need to create a simple dashboard with tables and dropdowns. My requirement is that whenever I push data, only that latest data should be shown in the dashboard.

For example, if I push a CSV file on 19th Nov, only that data should be displayed in the table whenever I open the dashboard. If I then push the CSV file again on 23rd Nov, the tables should display data only for that date. I don't want to change the time range manually in the dashboard for every update.

Please suggest.
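One hedged sketch (index name and the 5-minute window are assumptions): find the most recent ingest time and keep only events from around it, so the dashboard can stay on All Time while always showing just the latest push:

```
index=myindex
| eval itime=_index_time
| eventstats max(itime) as last_push
| where itime >= last_push - 300
```

Using _index_time (when the event arrived) rather than _time avoids problems if the CSV rows carry older timestamps of their own.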
Hi all! Please help me write a regular expression. I need to filter by URL and also exclude some subnets. For example: match example.com/articles/ and exclude the subnets 111.222.333.*  222.333.444.*  222.333.*.*
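A hedged sketch of the two patterns, demonstrated in Python so they can be checked (the field names are assumptions, and the user's placeholder octets are kept as-is; note that 222.333.*.* already covers 222.333.444.*, so the two collapse into one alternative). In SPL the same patterns would go into regex/where filters:

```python
import re

# keep events whose URL contains the target path
url_re = re.compile(r"example\.com/articles/")

# exclude source IPs in the given subnets (placeholder octets kept as-is);
# 222.333.*.* already covers 222.333.444.*, so one alternative suffices
exclude_re = re.compile(r"^(?:111\.222\.333|222\.333\.\d{1,3})\.\d{1,3}$")

def keep(url: str, src_ip: str) -> bool:
    """True if the event matches the URL filter and is not in an excluded subnet."""
    return bool(url_re.search(url)) and not exclude_re.match(src_ip)
```

In a Splunk search this would look roughly like `| regex url="example\.com/articles/" | regex src_ip!="^(111\.222\.333|222\.333\.\d{1,3})\.\d{1,3}$"` (field names assumed).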
I have uploaded a CSV dataset into Splunk, and have been able to successfully use the Datasets add-on, pivot my data, and create visualizations, reports and dashboards. I then decided to explore my dataset in the regular Splunk search. Once I returned to my dataset, it read "no results found". I adjusted the time range to All Time - still no results found. I then checked the reports and dashboards I created and I get a "no results found" error in every box where there should be a chart.

This has happened to me three times before, and each time I had to recreate all of my work. At the time I thought it was just a bug, not knowing what caused it. I now know I get this error every time I explore my dataset in the regular search.

Can anyone resolve this?
When I run setup, it shows the following message:

"Splunk Enterprise setup wizard ended prematurely because of an error. Your system has not been modified. To install this program at a later time, run the setup wizard again. Click the Finish button to exit the setup wizard."
I have a search with which I am creating a pie chart. I want to set it up so that when I select/click one of its sectors, it shows another pie chart with the values/fields in the hierarchy of that sector. How do I do that? Is it possible to create two pie charts of that sort?
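In a Simple XML dashboard this is usually done with a drilldown token rather than a second chart appearing in place: the first pie sets a token on click, and a second pie's search filters on it. A sketch (token and field names are assumptions):

```
<!-- inside the first pie chart's <chart> element -->
<drilldown>
  <set token="sector">$click.value$</set>
</drilldown>

<!-- second pie chart, shown only once a sector is clicked -->
<chart depends="$sector$">
  <search>
    <query>index=myindex category="$sector$" | stats count by subcategory</query>
  </search>
</chart>
```

The depends attribute keeps the second chart hidden until a sector has been clicked, which gives the "drill into the hierarchy" behaviour described.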