All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, newbie question here, but I can't seem to find an answer. I am trying to create a timechart-style grouped table that has product as the parent and error codes as the child. I can't seem to get any sort of multiple-column grouping where I can have separate calculations for each column variant. The structure would loosely be:

            Product1                       Product2
         200   201   300  ...          200   201   300   303  ...
date1     34    23    22                43    23    76    23
date2      2    43    45  ...

It would be fine to coalesce the product and error code, but I can't figure that out either, i.e. 200Prod1, 201Prod1, 200Prod2, 201Prod2. Is this something that is logically doable?
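A minimal sketch of the coalesced-column approach the post describes: concatenate the two fields into one series name, then timechart by it. The index and field names (`product`, `error_code`) are assumptions about the data:

```spl
index=myapp
| eval series=error_code."_".product
| timechart count by series limit=0
```

This produces one column per error-code/product pair (e.g. 200_Product1, 201_Product2), which is the flattened form of the two-level header sketched above.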
I have an attribute that is determined by two inputs: one with many possible values, the other with few. I could enumerate them in a giant joint table for automatic lookup, but building and maintaining the table would be tedious. I could put all the criteria in one giant eval, but maintenance would be a worse nightmare. The most maintainable way to do this would be to look up a map with the numerous input, then calculate with the sparse input. The challenge is to do this automatically for all searches. Because automatic lookup is performed after calculated fields, I cannot access the lookup map as a new field. My question is whether there is a way to explicitly invoke a lookup inside eval, like

EVAL-new_attrib = if(lookup(map1, value_of_many) == "X", xvaluefunct(value_of_few), elsefunct(value_of_few))

Here, new_attrib is the attribute I want, and value_of_many and value_of_few are the two inputs. I'm hoping that lookup(map1, value_of_many) would give me an output as if I used

| lookup map_of_many value_of_many OUTPUT map1
| eval new_attrib = if(map1 == "X", xvaluefunct(value_of_few), elsefunct(value_of_few))
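One way to keep the explicit lookup-plus-eval pair maintainable without automatic lookups is to wrap it in a search macro, so every search invokes it with a single token. A sketch in macros.conf, using the lookup and field names from the post (the macro name is an assumption):

```spl
[get_new_attrib]
definition = lookup map_of_many value_of_many OUTPUT map1 | eval new_attrib = if(map1 == "X", xvaluefunct(value_of_few), elsefunct(value_of_few))
```

Searches would then call it as `index=... | `get_new_attrib``; the logic lives in one place instead of being repeated per search.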
I've got a dedicated Heavy Forwarder that I am trying to use to ship logs out via syslog:

outputs.conf

[syslog:outgoing]
server = receiver.vm.com:5140
type = tcp
priority = <110>
maxEventSize = 25600

But I also want to include Splunk metadata fields in the event as it gets shipped:

props.conf

[host::*]
TRANSFORMS-Syslog_Items = Syslog_Epoch, Syslog_SourceType, Syslog_Source, Syslog_Host, Syslog_Index
priority = 1

transforms.conf

[Syslog_Index]
SOURCE_KEY = _MetaData:Index
REGEX = ^(.*)$
FORMAT = toindex=$1 $0
DEST_KEY = _raw

[Syslog_Host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.*)$
FORMAT = sourcehost=$1 $0
DEST_KEY = _raw

[Syslog_SourceType]
SOURCE_KEY = MetaData:Sourcetype
REGEX = ^sourcetype::(.*)$
FORMAT = sourcetype=$1 $0
DEST_KEY = _raw

[Syslog_Source]
SOURCE_KEY = MetaData:Source
REGEX = ^source::(.*)$
FORMAT = source=$1 $0
DEST_KEY = _raw

[Syslog_Epoch]
SOURCE_KEY = _time
REGEX = ^(.*)$
FORMAT = epoch=$1 $0
DEST_KEY = _raw

All of this works for most data. Example data that worked:

<110> generichostname toindex=os sourcehost=generichostname source=df sourcetype=df epoch=1621371418 Filesystem...

But I've come to realize that structured data (specifically _json sourcetype data) does not work. Example data that failed:

<110> generichostname {"hostname": "generichostname", "ipaddress": "10.x.x.x"}

I have been trying different modifications to the _json sourcetype configuration, and I even went so far as to erase the _json sourcetype altogether, but nothing works. If the data is JSON, the transforms simply do not get applied. How do I fix this? Again, this is a dedicated Heavy Forwarder whose sole duty is to ship out syslog. Universal Forwarders will optionally be given this HF as an output destination for any logs we want shipped out via syslog, so I don't care how badly or how weirdly I change the parsing configs on this Splunk instance. I just want to indiscriminately insert metadata in front of ALL logs that this HF receives and ships out. Any insight would be very appreciated! Thanks in advance!
Greetings, I've installed the Cisco eStreamer eNcore for Splunk Add-On, and the setup option to input the FMC hostname, port, and PKCS12 info is missing. I have removed and reinstalled it several times. Has anyone else come across this issue? Thanks in advance, ~John
I read the documentation on the mandatory SimpleXML dashboard attribute version=1.1, which is required by Splunk Enterprise 8.2 and Splunk Cloud. According to the Simple XML reference, Splunk Enterprise 8.1.x gives a warning when it sees the 1.1 version attribute: "Splunk Enterprise version 8.1.x and lower do not recognize this attribute and give a warning." My question is: how can I create an app that works on all currently supported versions of Splunk Enterprise and Cloud without errors or warnings?
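For reference, the attribute in question sits on the root element of a Simple XML dashboard; a minimal fragment (the label and body here are placeholders):

```xml
<dashboard version="1.1">
  <label>Example dashboard</label>
  <row>
    <panel>...</panel>
  </row>
</dashboard>
```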
I am running Splunk with a free license in a Docker container on an Ubuntu system. I am using the MLTK and also want to use the Deep Learning Toolkit. The install/configure instructions seem to assume that Splunk is running directly on the machine and communicating with the Docker daemon. Is it possible to run Splunk in Docker and use the DLTK? How is it configured?
Problem: I want to ignore all results from my search that have message: <4 digits> in them. For example:

{ timestamp: 2021-05-17T22:30:06.299Z, level: error, message: 9173 }

Research done: I have looked into the Splunk docs. I tried implementing NOT regex "message: \d{4}" and NOT rex "message: \d{4}", but neither worked.
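A minimal sketch of one way to do this with the regex command, which filters events rather than extracting fields. The index and level filter are assumptions; the pattern matches the sample event above:

```spl
index=myapp level=error
| regex _raw!="message: \d{4}"
```

The `!=` form keeps only events whose raw text does not match the pattern, which is the filtering behavior that NOT rex (an extraction command) does not provide.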
Hi everyone, I have raw logs in this format:

101107134,2105180551,210518,101107236,gcp_salesforce_user_default_opportunity,cstonedb3,0,E3
101107133,2105180545,210518,101107236,gcp_salesforce_user_role,cstonedb3,50,E3

I have extracted the fields with the query below:

index=abc source="/splunkLogs/JOB_MDJX_CS_EXTR_STATS_PLATINUM.csv"
| append [search source="/splunkLogs/JOB_MDJX_CS_EXTR_STATS_GOLD.csv"]
| append [search source="/splunkLogs/JOB_MDJX_CS_EXTR_STATS_SILVER.csv"]
| eval fields=split(_raw,",")
| eval Environment=mvindex(fields,7)
| eval RunDate=mvindex(fields,2)
| eval JOBFLOW_ID=mvindex(fields,0)
| eval SRC_NM=mvindex(fields,4)
| eval REC_COUNT=mvindex(fields,6)
| search REC_COUNT!="REC_COUNT"
| where Environment="E3"
| where JOBFLOW_ID LIKE "%%"
| eval Run_Date=strptime(RunDate,"%Y%m%d")
| table Run_Date

The third column (210518 in the samples above) is RunDate. I need to add 20 in front of it to make it 20210518. Can someone guide me on how to achieve that in my query?
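The two-digit year can be prepended with string concatenation before parsing; a sketch using the fields already extracted in the search above:

```spl
| eval RunDate="20".RunDate
| eval Run_Date=strptime(RunDate, "%Y%m%d")
```

Alternatively, strptime can parse the two-digit year directly with the "%y%m%d" format string, without modifying RunDate at all.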
Hello guys! Does anyone know how I can get the raw data (raw log) from a dataset in Enterprise Security? In Splunk Enterprise I usually use | eval Log=_raw, but when I try the same in ES, I get an empty response. Thanks in advance for any help. Cheers!
I wrote a program that uses the Splunk API to modify alerts. I tested it on one of my own alerts and it worked fine. When I ran it against an alert owned by a Splunk service account, it created a new alert identical to the one I wanted to modify (identical name, too), except that the owner is now me. In the API call I only provided the name of the alert and the section of the alert I wanted changed; it appears to have duplicated the rest from the existing alert. I also used a token associated with my username. When I look in the alerts menu I only see the original alert; I need a search (| rest /servicesNS/-/-/saved/searches | search alert.track=1 ...) to see the other one. 1) How do I delete the duplicate alert? It does not appear under my alerts either. I don't have access to the machine, though I could request help from someone who does. 2) To modify an alert owned by someone else through the API, do I need a token generated from their account? I can edit the alert through the GUI. Thanks, Scott
I pushed apps to a search head cluster with 3 search heads. Two of them are working fine, but one search head's URL is not accessible and it cannot run searches. The search head appears to be in a restart loop; logs are as below:
This should be something simple to figure out, but I can't get it to work. I want to extract the username from the Message field of a Security event log:

Message=NPS Extension for Azure MFA: CID: 6gof474f-4g9d-894f-asb-9abffedxs618 : Access Accepted for user Barry.Allen@LexLIndustries.org with Azure MFA response: Success and message: session r334r562-cf4f-7584-afc5-essdfs4dd67

I want to pull the email address after 'user' in the message and assign it to a field. Any help appreciated.
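A minimal rex sketch based on the sample message above; the captured field name user_email is an assumption:

```spl
| rex field=Message "Access Accepted for user\s+(?<user_email>\S+)\s+with"
```

This anchors on the literal text around the address, so it captures whatever token follows "user" regardless of the address itself.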
Hi there, I'm just a basic user of Splunk at my company, and I have zero experience with programming or SQL, so please don't be rough on me. I'm creating queries to extract data for specific entries. For example:

|search LOTID = 19ADET165T0 OR 19ADET1E666 OR 19ADET1E838 OR 19ADET1FT58 OR 19ADET1G341 OR 19ADET1G511 OR 19ADET1G896 OR 19ADET1G898 OR 19ADET1G90T

which, with additional conditions, returns results for these entries. Following this method I can only create a single query for a maximum of 1000 entries. Is there any way that I (a basic user) can get around this limitation and create a successful query for e.g. 5000 entries using |search?
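One common way around listing thousands of values inline is to keep them in a lookup file and feed them to the search via a subsearch. A sketch assuming a lookup file named lotids.csv with a LOTID column (both names are assumptions):

```spl
index=myapp [| inputlookup lotids.csv | fields LOTID]
```

The subsearch expands into (LOTID=... OR LOTID=...) automatically; note that very large lists may still run into the default subsearch result limits, which an admin can raise.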
We want to implement Splunk Cloud. Do we need to implement an IDM (Inputs Data Manager)? Our data would come from Azure Cloud and our data center. What is the recommended approach for this use case?
Hello Roberto,   Thanks for checking out Splunk Cloud! Unfortunately, there was a system error that prevented your Splunk Cloud Free trial from being created. Try requesting again at Splunk.com.
I'm working with a data source that has two different versions. In one version the information is double-quoted, while in the other it is single-quoted. This is causing me issues because after extraction the single-quoted information still has its single quotes, while the double-quoted information has no quotes at all. This throws counts off, since the single-quoted string and the unquoted string are not the same. We want to keep the actual raw source, so SEDCMD is out. I've got this working in a search:

search
| eval raw=_raw
| rex field=raw mode=sed "s/\'/\"/g"
| rex field=raw "\[(?<audit_event>[^\:]+)\:(?<vendor_severity>[^\:]+).+(?<vendor_xml>\<vendor.+\<\/vendor\>)"

Now I'm trying to convert it to props and transforms. My props.conf:

EXTRACT-vendor_raw = (?<raw>^.*$)
REPORT-vendor_extract_fields = vendor_replace_single_quotes, vendor_fields
KV_MODE = xml

My transforms.conf:

[vendor_replace_single_quotes]

[vendor_fields]
REGEX = \[(?<audit_event>[^\:]+)\:(?<vendor_severity>[^\:]+).+(?<vendor_xml>\<vendor.+\<\/vendor\>)
SOURCE = raw

What I can't figure out is how to do the replace, as in the search, in either props.conf or transforms.conf. Everything I've found uses SEDCMD. Any thoughts on this? TIA, Joe
I'm currently trying to create a search that looks for employees hired within the last 3 months, but I keep getting all of the results. Here's the code:

| where _time >= relative_time(now(),"-3mon")
| eval HR_STATUS=case(CGH_RITS_EMP_STAT="TE", "Terminated", CGH_RITS_EMP_STAT="AC", "Active", CGH_RITS_EMP_STAT="LE", "Leave of Absence", CGH_RITS_EMP_STAT="PA", "Paid Leave of Absence", true(), "Other")
| eval TYPE=case(CGH_RITS_EMP_CODE="E", "EMPLOYEE", CGH_RITS_EMP_CODE="C", "CONSULTANT", true(), "Other")
| eval Date=(strptime(ORIG_HIRE_DT,"%Y%m%d"))
| eval Hire_Date=strftime(Date,"%m/%d/%Y")
| table CGH_SOE_ID, FIRST_NAME, LAST_NAME, JOBTITLE, TYPE, EMAIL_ADDR, Hire_Date, ORIG_HIRE_DT, HR_STATUS
| rename CGH_SOE_ID AS SOE_ID, EMPLID AS GEID
| sort Hire_Date
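The where clause above tests _time (the event's timestamp) rather than the hire date, which is why every record passes. A sketch of filtering on the parsed hire date instead, using the field names from the search above:

```spl
| eval Date=strptime(ORIG_HIRE_DT, "%Y%m%d")
| where Date >= relative_time(now(), "-3mon")
```

Because Date is an epoch value produced by strptime, it can be compared directly against relative_time, and the rest of the pipeline can reuse it for Hire_Date formatting.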
Hello guys, I hope everyone is doing well. I would like to know if there is a way in Splunk to show the earliest and latest event based on a datetime reference. In the hotel chain where I work, we deal with a lot of people who fraudulently create new accounts and use other people's credit cards or names to make a reservation, and in our system we have a way to set up a "fraud alarm". What I'd like to do is create a table where I can see the date of the fraud alarm and the very call before and after that fraud alarm. Please allow me to illustrate. I can have a table like this one:

FRAUD_DATE           ID  RISK
2021-01-14 15:30:12  1   AGT
2021-02-03 18:46:59  1   JUI
2021-02-05 17:16:23  1   JUI
2021-11-26 21:15:14  3   AGT

and also a table like this:

ID  RESERVATION  DATE
1   UJS01        2021-01-13 15:30:12
1   UJS02        2021-01-14 16:30:12
2   UJS03        2021-02-03 18:46:59
1   UJS04        2021-02-06 14:00:23
2   UJS05        2021-02-03 18:46:59

This is the table I am looking for: all IDs that have made a reservation and also have a fraud alarm, with the date of the fraud alarm (related to that ID), the date of the reservation right before the fraud alarm, and the date of the reservation right after it:

ID  RESERVATION_CODE_BEFORE  DATE_REV_BEFORE      FRAUD_ALARM          DATE_REV_AFTER       RESERVATION_CODE_AFTER
1   UJS01                    2021-01-13 15:30:12  2021-01-14 15:30:12  2021-01-14 16:30:12  UJS02
1   UJS02                    2021-01-14 16:30:12  2021-02-03 18:46:59  2021-02-06 14:00:23  UJS04
1   UJS02                    2021-01-14 16:30:12  2021-02-05 17:16:23  2021-02-06 14:00:23  UJS04

If there is no prior or post date, then N.A. would be the best choice to tabulate. Thank you so much to anyone who can help me out.
These are the searches I can use to extract the information:

Fraud alarms:
index="fraud" | search "fraud_alarms"=* | fields ID, DATE_FRAUD, RISK

Reservations:
index="cx_reserv" | search "pay_ok"=* | fields ID, date_reser, code_rev

Kindly, Cindy

PS: I was wondering if perhaps one could group all the reservation dates by ID and fraud date, subtract each reservation date from the fraud date, create a field with the result named "value", and then select as the "reservation right before" the one with the greatest negative value and as the "reservation right after" the one with the smallest positive value. But I don't know if Splunk has its own, faster way to do this. Thank you so much, everyone.
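A partial sketch of the subtraction idea from the PS: pair every reservation with every fraud alarm for the same ID (join with max=0 keeps all matches), compute the signed difference, then keep the closest reservation on each side of each alarm. The timestamp format string is an assumption about the data; field and index names come from the searches above:

```spl
index="cx_reserv" pay_ok=*
| fields ID date_reser code_rev
| join type=inner max=0 ID
    [ search index="fraud" fraud_alarms=* | fields ID DATE_FRAUD ]
| eval value = strptime(date_reser, "%Y-%m-%d %H:%M:%S") - strptime(DATE_FRAUD, "%Y-%m-%d %H:%M:%S")
| eval before = if(value < 0, value, null()), after = if(value > 0, value, null())
| eventstats max(before) AS closest_before, min(after) AS closest_after by ID, DATE_FRAUD
| where value = closest_before OR value = closest_after
```

This leaves up to two rows per (ID, fraud alarm) pair, still carrying code_rev; reshaping them into the single before/alarm/after row of the target table would be a final stats step, and alarms with no reservation on one side would simply lack that row (fillnull can supply the N.A.).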
I need help splitting this output onto its own lines. If we can even remove the quotes, commas, curly brackets, and square brackets, that would be icing on top, i.e.:

10879053', {'duration': [57.005]
10879054', {'duration': [133.012]
10879056', {'duration': [100.212]

CustomResults="{packet_duration {['10879053', {'duration': [57.005]}, '10879054', {'duration': [133.012]}, '10879055', {'duration': [43.491]}, '10879056', {'duration': [100.212]}, '10879057', {'duration': [76.366]}, '10879214', {'duration': [44.759]}, '10879215', {'duration': [88.866]}, '10879463', {'duration': [57.006]}, '10879464', {'duration': [73.37]}, '10879466', {'duration': [94.44]}, '10907774', {'duration': [55.866]}, '10907850', {'duration': [11.561]}, '10907851', {'duration': [30.743]}, '10907852', {'duration': [104.38]}, '10908319', {'duration': [71.383]}, '10960046', {'duration': [0.16]}, '10960047', {'duration': [47.96]}, '10960097', {'duration': [107.256]}, '10960131', {'duration': [60.729]}, '10960153', {'duration': [96.39]}, '10960210', {'duration': [44.168]}, '10960211', {'duration': [43.384]}, '1108114', {'duration': [83.468]}, '1134822', {'duration': [66.569]}, '115226', {'duration': [40.005]}, '1164456', {'duration': [70.144]}, '120397', {'duration': [91.127]}, '1328915', {'duration': [41.269]}, '133417', {'duration': [72.306]}, '13537782', {'duration': [66.259]}, '13537830', {'duration': [47.595]}, '13537845', {'duration': [48.153]}, '13537846', {'duration': [48.825]}, '13537888', {'duration': [20.178]}, '13537889', {'duration': [47.347]}, '13537898', {'duration': [59.009]}, '1359649', {'duration': [76.475]}, '1412977', {'duration': [131.558]}, '14865730', {'duration': [45.554]}, '14865745', {'duration': [50.925]}, '14865746', {'duration': [41.368]}, '14865828', {'duration': [161.74]}, '14865830', {'duration': [94.507]}, '14865844', {'duration': [45.727]}, '14865845', {'duration': [95.748]}, '14865959', {'duration': [66.047]}, '14865960', {'duration': [77.699]}, '1665884', {'duration': 
[117.798]}, '1723748', {'duration': [111.325]}, '1760213', {'duration': [100.546]}, '1769863', {'duration': [88.724]}, '1851710', {'duration': [116.289]}, '1853478', {'duration': [6.64]}, '1853480', {'duration': [49.312]}, '1886160', {'duration': [59.463]},
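A sketch of one way to split the pairs out: rex with max_match=0 pulls every ID/duration pair into multivalue fields, then mvzip and mvexpand put each pair on its own row with the punctuation stripped. It assumes CustomResults is already extracted as a field:

```spl
| rex field=CustomResults max_match=0 "'(?<pkt_id>\d+)',\s*\{'duration':\s*\[(?<pkt_dur>[\d.]+)\]"
| eval pair=mvzip(pkt_id, pkt_dur, " duration=")
| mvexpand pair
| table pair
```

Each output row then looks like `10879053 duration=57.005`, with the quotes, commas, and brackets gone.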
Hi, new to Splunk, first-time listener, hoping for some help. I am trying to extract nested JSON data from a Windows event log message in Splunk. This works (up to a point):

index="someindex" host="Ahost1" | spath input=Message

It's great, except one of the JSON fields is called 'JSON_ArrayUsers', containing UPNs of users. Sometimes it contains a single user, sometimes more than one. When there is more than one user, Splunk calls the field JSON_ArrayUsers{}, and when just a single user is listed it names the field JSON_ArrayUsers. This makes searching the field difficult, as it's called two different things. Is there an easy way to stop the {} appearing? I already know it's an array! Thanks, Pete
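A sketch of one way to normalize the two names into a single field with coalesce; note the single quotes, which are required to reference a field name containing {} inside eval (the target field name `users` is an assumption):

```spl
index="someindex" host="Ahost1"
| spath input=Message
| eval users=coalesce('JSON_ArrayUsers{}', 'JSON_ArrayUsers')
```

Subsequent commands can then search and report on `users` regardless of whether the original event held one UPN or many.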