All Topics

Hello, I am trying to join two searches to see whether the same hash exists in another index as well. Below is my search. The issue is that every time I run it over the same time range, I see different results. Why?

Base search: I've combined the results of three different hash fields into one:

(index=a sourcetype="a" (hash1=* OR hash2=* OR hash3=*))
| fields hash1, hash2, hash3
| table hash1, hash2, hash3
| eval hash=mvzip(mvzip('hash1','hash2',"|"),'hash3',"|")
| fields hash
| makemv hash delim="|"
| mvexpand hash

From here, I've joined the other two indexes. Both use the same field for file hashes, so I'm joining on hash. The join itself seems to work fine:

| join type=left hash
    [| search (index=b sourcetype=b hashfile=*) OR (index=c sourcetype=c hashfile=*)
     | fields hashfile, filename, index
     | eval hash=hashfile]

Each search run individually returns 2k+ results, but after combining them I see only 1 result in the stats table, and every time I hit run over the same time range I see a different file name. Why? Any help would be appreciated, thanks!
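For comparison, here is a join-free sketch of the same correlation using a single search plus stats (the index, sourcetype, and field names are copied from the post). The subsearch behind join is subject to result-count and runtime limits, which is a common reason a joined search returns different results from run to run:

(index=a sourcetype="a" (hash1=* OR hash2=* OR hash3=*))
    OR (index=b sourcetype=b hashfile=*)
    OR (index=c sourcetype=c hashfile=*)
| eval hash=if(index=="a", mvappend(hash1, hash2, hash3), hashfile)
| mvexpand hash
| stats values(filename) as filename, values(index) as indexes by hash
| where mvcount(indexes) > 1

The final where keeps only hashes seen in more than one index, i.e. hashes from index a that also appear in b or c.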
Hi All, I've got a generic syslog app which pulls in EVERYTHING in the syslog directory with sourcetype=syslog:unconfigured.

inputs.conf

[monitor:///var/log/syslog-ng/*/messages]
index = syslog
sourcetype = syslog:unconfigured
host_segment = 4

This is done so we can catch any new syslog devices that were not configured to go to the correct sourcetype. We have a props.conf that routes data to the right index/sourcetype depending on the hostname.

props.conf

# InfoBlox
[source::/var/log/syslog-ng/(10.164.55.55|10.9.55.56|prodinfoblox1|prodinfoblox2)/messages]
TRANSFORMS-reroute_index = route_to_index_infoblox
TRANSFORMS-reroute_sourcetype = route_to_sourcetype_infoblox:file
TZ = UTC

transforms.conf

[route_to_index_infoblox]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = infoblox

[route_to_sourcetype_infoblox:file]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::infoblox:file

The props.conf stanza above, with a regex alternation matching the host portion of the source, doesn't work. However, naming each source individually does work, as does a basic wildcard:

# InfoBlox
[source::/var/log/syslog-ng/10.164.55.55/messages]
TRANSFORMS-reroute_index = route_to_index_infoblox
TRANSFORMS-reroute_sourcetype = route_to_sourcetype_infoblox:file
TZ = UTC

[source::/var/log/syslog-ng/10.9.55.56/messages]
TRANSFORMS-reroute_index = route_to_index_infoblox
TRANSFORMS-reroute_sourcetype = route_to_sourcetype_infoblox:file
TZ = UTC

[source::/var/log/syslog-ng/prodinfoblox*/messages]
TRANSFORMS-reroute_index = route_to_index_infoblox
TRANSFORMS-reroute_sourcetype = route_to_sourcetype_infoblox:file
TZ = UTC

I've tried escaping the slashes, but that doesn't work either:

# This also doesn't work
[source::\/var\/log\/syslog-ng\/(10.164.55.55|10.9.55.56|prodinfoblox1|prodinfoblox2)\/messages]

Does anyone have any ideas how to get the regex to work in the source:: stanza? Some of these devices have up to 30 hosts, and having it all as a one-liner would make things much cleaner. I'm aware I can do this in transforms.conf with something like the following, but then I'd need the source match in two spots, which is prone to user error:

[route_to_index_infoblox]
SOURCE_KEY = MetaData:Source
REGEX = /var/log/syslog-ng/(192\.168\.1\.1|192\.168\.1\.2|etc)/messages
DEST_KEY = _MetaData:Index
FORMAT = infoblox

[route_to_sourcetype_infoblox:file]
SOURCE_KEY = MetaData:Source
REGEX = /var/log/syslog-ng/(192\.168\.1\.1|192\.168\.1\.2|etc)/messages
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::infoblox:file

There has to be something just slightly off with my regex.
There's no time in my log. I want to extract the date from the source file name at ingest time. The source name is /var/log/data_20220507.log. How can I add a random time after that date, so that, for example, _time = 2022/05/07 11:23:22.2? I would appreciate it if you could tell me the props.conf and transforms.conf settings.
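A minimal sketch of one way to do this at ingest time with INGEST_EVAL, assuming the date is always embedded in the source as data_YYYYMMDD.log; the sourcetype stanza and transform name below are made up for illustration, and the random offset is whole seconds within that day:

props.conf

[your:sourcetype]
TRANSFORMS-settime = set_time_from_source

transforms.conf

[set_time_from_source]
INGEST_EVAL = _time=strptime(replace(source, ".*data_(\d{8})\.log", "\1"), "%Y%m%d") + (random() % 86400)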
Given JSON with nested hashes:

| makeresults
| eval _raw="{\"yes\":true,\"no\":false,\"a\":{\"x\":0,\"y\":0,\"z\":0},\"c\":{\"x\":1,\"y\":2,\"z\":3},\"d\":{\"x\":1,\"y\":4,\"z\":9}}"
| spath

"a", "c", and "d" are nested hashes. There are other fields, "yes" and "no", that are not hashes. What I am trying to do is filter out the non-hashes and then split them into multiple rows:

Name  x  y  z
a     0  0  0
c     1  2  3
d     1  4  9

The tricky part is that the top-level field names ("yes", "no", "a", "c", "d") are not constant. However, the sub-fields "x", "y", "z" are. Thoughts?
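One possible sketch, assuming only the x/y/z sub-field names are fixed: flatten the spath output into key/value pairs with untable, keep only keys of the form <Name>.<axis>, and pivot back with xyseries:

| makeresults
| eval _raw="{\"yes\":true,\"no\":false,\"a\":{\"x\":0,\"y\":0,\"z\":0},\"c\":{\"x\":1,\"y\":2,\"z\":3},\"d\":{\"x\":1,\"y\":4,\"z\":9}}"
| spath
| fields - _raw
| untable _time key value
| rex field=key "^(?<Name>.+)\.(?<axis>[xyz])$"
| where isnotnull(axis)
| xyseries Name axis value

Top-level scalars like "yes" and "no" fall out at the where step because their keys carry no .x/.y/.z suffix.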
Any idea about this error? I cannot find a message like this documented anywhere. What is the main issue behind this error?

2022-04-29 18:11:03,533+0900 process: 10780 thread: MainThread ERROR [itsi.migration] [filesave_migration_interface:96] [migration_save_single_object_to_kvstore] Exception adding image 5e6eda58d3bc8f4ad0af: HTTP 409 Conflict -- A document with the same key and user already exists

host = ITSIOS
source = /opt/splunk/splunk/var/log/splunk/itsi_migration_queue.log
sourcetype = itsi_internal_log
Unable to perform the following search provided by Splunk to check forwarder certificate package version:

index=_internal source=*metrics.log group=tcpout_connections name=splunkcloud*
| stats latest(_time) AS _time latest(name) AS name by host
| rex field=name "(?<output_group>splunkcloud_202[23456789]\d+)\_"
| eval fwd_config=if(isnotnull(output_group),"new","legacy")
| stats count by _time host output_group fwd_config
| reltime
| fields _time reltime host output_group fwd_config
| sort 0 fwd_config
Hello. I'm seeing a lot of articles in web searches about turning on HTTPS for HEC, but approximately zilch on turning it off. I did find this:

"Whether the HTTP Event Collector server protocol is HTTP or HTTPS. 1 indicates HTTPS is enabled; 0 indicates HTTP. The default value is 1. HTTP Event Collector shares SSL settings with the Splunk Enterprise instance and can't have enableSSL settings that differ from the settings on the Splunk Enterprise instance."

We need HEC to run without TLS, and we can live with the Web UI not having TLS too if that helps with HEC. But if I put:

[http]
disabled = 0
enableSSL = 0

...into /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf and restart Splunk, HEC continues to demand HTTPS, and /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf is automatically rewritten to:

[http]
disabled = 0
enableSSL = 1

What do I need to do to make HEC use HTTP, not HTTPS? (We realize that HTTPS is more secure. For our production Splunk we'll use HTTPS, but for our team's development environments it just makes more sense to use HTTP. I haven't dug into why, but I suspect HTTPS is proxied somehow.) Thanks!
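One way to see which configuration file is winning for that setting is btool; a quick sketch, with the path assuming a default /opt/splunk install:

# Show the effective [http] stanza and the file each setting comes from
/opt/splunk/bin/splunk btool inputs list http --debug

Note that the HEC Global Settings dialog in Splunk Web (Settings > Data inputs > HTTP Event Collector > Global Settings) also manages the Enable SSL flag, so a value saved there may be what keeps rewriting the hand-edited inputs.conf.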
Greetings. I've been trying to build a correlation search that sets a default disposition value when it runs, but so far it doesn't work as advertised. I've tried this in one of two ways:

1) Manually setting it to a valid value, e.g.

| eval disposition="disposition:7"
| eval disposition_label="My cool disposition label"

2) Editing the correlation search in the advanced search editor and setting the parameter myself.

I've tried this with and without quotes, but I still always get disposition:6 set by default. I have validated that this disposition and label exist and are enabled in the Incident Review settings, and that the disposition can be set manually. Can someone shine some light on what I'm doing wrong? Many thanks!
Hello, I was able to create dashboards back in 8.1.9. After upgrading to 8.2.6, I get a strange error any time I try to create a new dashboard or clone one (whether Classic or Studio). I haven't been able to find any information on it. Any help is greatly appreciated! V/r, mello920
The MS 365 TA has been installed on a heavy forwarder and all inputs have been configured, but the client is not getting the Teams information. Only O365 is supported; the Microsoft Teams TA is not supported by Splunk. What can I do to pull the Teams data?
We have a 3rd party pulling AWS logs for as far back as AWS holds onto them. However, we want to be able to go back further, so we are looking at our AWS index in Splunk. We want to extract a full export of _raw for the entire index. We have access to the management port of our search head, which points to an indexer cluster holding all of the AWS index data; note that the index is SmartStore-enabled. What's the best way to export this programmatically? It would not scale to run the search manually in the GUI and export it. We've looked at a oneshot search with the JavaScript SDK, but it seems to be timing out even though we have baked in pagination. Thanks in advance.
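A sketch of one programmatic option, the streaming export endpoint on the search head's management port; the host name, credentials, and time range below are placeholders. Unlike a blocking oneshot job, the export endpoint streams results back as they are retrieved, so nothing has to be held in a finished job:

# Stream every _raw event from the aws index to a local CSV file
curl -k -u admin:changeme "https://searchhead:8089/services/search/jobs/export" \
     --data-urlencode "search=search index=aws | fields _raw" \
     -d output_mode=csv \
     -d earliest_time=0 \
     -d latest_time=now \
     > aws_raw_export.csv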
I've got a dashboard created with Maps+ plotting the events on a map. The next section is a table of events. I'd like to schedule this as an automated report sent to some folks. However, the PDF report shows only the table, plus the message "PDF Export does not support custom visualizations." Does anyone have any ideas how I could still accomplish what I'm hoping for?
I have a lookup table that lists all users along with their department, like so:

email                department
---------------------------------
user1@company.com    Sales
user2@company.com    Engineering
user3@company.com    Accounting
user4@company.com    Sales
user5@company.com    HR

I also have an index that lists events for a particular application. The index contains lots of fields, but for my purposes I'm really only interested in _time and actor.email. My goal is to count the number of days per week that every user in a given department logs events in the index, even if that number is zero. I can get pretty close to what I want with this search:

index=whatever <base search here>
| lookup user.csv email as actor.email OUTPUT department
| bin _time span=1d
| search department="Sales"
| stats count as numEvents by _time, actor.email
| eval weekNumber = strftime(_time,"%U")
| stats dc(_time) as numDays by actor.email, weekNumber
| xyseries actor.email, weekNumber, numDays

The problem with this search is that a user in the lookup table who returned zero events during the time frame won't appear in the results. I considered appending [|inputlookup user.csv] to the search, but because my append doesn't include a _time field, I can't get everything to line up correctly. How do I run a search for every user in the correct department in the lookup table and return zero events per week if they didn't interact with the system?
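A sketch of one common zero-fill pattern, assuming the field names from the post: append the department's users after the xyseries, collapse duplicate users, then fill the missing week columns with 0:

index=whatever <base search here>
| lookup user.csv email as actor.email OUTPUT department
| bin _time span=1d
| search department="Sales"
| stats count as numEvents by _time, actor.email
| eval weekNumber = strftime(_time,"%U")
| stats dc(_time) as numDays by actor.email, weekNumber
| xyseries actor.email, weekNumber, numDays
| append
    [| inputlookup user.csv
     | search department="Sales"
     | rename email as actor.email
     | fields actor.email]
| stats first(*) as * by actor.email
| fillnull value=0

Users with no events appear only via the appended lookup rows, end up with null week columns after the stats first(*) merge, and get those columns set to 0 by fillnull.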
Hi, I have created a field, "from", which is a concatenation of 2 string fields, as follows:

index = .....
| eval time_epoch = strptime('SESSION_TIMESTAMP', "%Y-%m-%d %H:%M:%S")
| convert ctime(time_epoch) as hour_minute timeformat="%Y-%m-%d %H:%M"
| strcat URL_PATH ":" SEQUENCE from
| table from

The "from" field is made up of a URL string, a ":" character, and then a number in string format. I need to create another field, "to", so that for each Nth event, whose "from" value ends in the number N, the corresponding "to" holds the (N+1)th event's URL, ":" and (N+1)th value. Example:

from       to
....:1     ....:2
....:2     ....:3
....:3     ....:4
...
....:N     <BLANK>

In this way, the last value of the "from" field has a blank "to" value. Essentially, I need to shift the "from" values up by one row and call that new field "to". I have tried regex and different eval combinations but with no success. Can you please help? Many thanks, P
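A sketch of the usual streamstats approach, reusing the fields from the post: sort the events newest-first so each row can take the "from" of the row ahead of it chronologically, then restore the original order. The chronologically last event has no following event, so its "to" stays blank:

index = .....
| eval time_epoch = strptime('SESSION_TIMESTAMP', "%Y-%m-%d %H:%M:%S")
| strcat URL_PATH ":" SEQUENCE from
| sort 0 -time_epoch
| streamstats current=f window=1 last(from) as to
| sort 0 +time_epoch
| table from to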
Is it possible to pull in VPC flow logs from an S3 bucket? The IAM role has been created, but I'm not sure the data is being retrieved/parsed accurately. There was no input option for S3 when using the AWS add-on to pull in VPC flow logs (only Kinesis or CloudWatch). Can the input be configured manually, or do we have to change where the VPC flow logs are stored?
Hello my fellow Splunkers, I am trying to use a second index as a lookup for a field in the first index.

index=products contains the product serialNumbers1.
index=inventory contains the product serialNumbersAll and productsNames.
serialNumbers1 is a subset of serialNumbersAll.

I need to table serialNumbers1 and the equivalent productsNames. Example:

(index=products OR index=inventory)
| table serialNumbers1 serialNumbersAll productsNames

we get:

serialNumbers1   serialNumbersAll   productsNames
111
222
333
444
                 111                apple
                 222                orange
                 333                banana
                 444                kiwi
                 555
                 666
                 777
                 888

The desired output is:

serialNumbers1   serialNumbersAll   productsNames
111                                 apple
222                                 orange
333                                 banana
444                                 kiwi
                 111                apple
                 222                orange
                 333                banana
                 444                kiwi
                 555                lemon
                 666                vege
                 777                potatoes
                 888                sweet potatoes

Notes: I have a huge data set (more than 200K events), so using eventstats is not an option as it hits the limit, and increasing the limit is not an option; using a lookup table is not an option for me either.
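A sketch of a stats-based correlation (no join, no eventstats, no lookup) that tables each serialNumbers1 with its equivalent productsNames, using the index and field names from the post:

(index=products serialNumbers1=*) OR (index=inventory serialNumbersAll=*)
| eval serial=coalesce(serialNumbers1, serialNumbersAll)
| stats values(productsNames) as productsNames, values(index) as src by serial
| search src="products"
| rename serial as serialNumbers1
| table serialNumbers1 productsNames

The final search keeps only serials that were seen in index=products, which avoids the eventstats limit mentioned above.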
I will be the first to admit I am by no means even a novice in Splunk. I am trying to fix an issue that was recently created by the need to update a service account password associated with Splunk. We recently changed the password for the account that runs the splunkd service. The service started back up without any issues; however, when I attempt to log into the Splunk web app I get an unauthorized error. It seems like an obvious authentication issue, but due to my lack of knowledge of Splunk and how it is set up, I am not even sure where to begin looking.
I have a requirement to get Cisco AMP events into Splunk Cloud. For Splunk Enterprise I use Python, but with no access to the back end, how is it done in Cloud? There is no "Cisco AMP" TA, so I'm at a loss (for the moment).
Hello all, we receive splunkd.log from every universal forwarder into our _internal index. There are some events with log_level=ERROR that I need to analyze; some of them are related to PowerShell script execution errors. The issue with these events is that the script prints the error across several lines, and it gets split into multiple events, all of them with the same _time (in the image below, the field "a_fechahora" is equal to _time). I was able to merge the "a_mensaje" rows by _time, but there is an issue with the order of the rows. For example, as you can see highlighted in green, the "Co" fragment is incomplete and continues some lines below with "mmandNotFoundException". The same happens with "or if a pat" (...) "h was included". Is this a common / known issue? Is there any way to prevent these scrambled lines in PowerShell outputs? Regards,
The advisory (https://www.splunk.com/en_us/product-security/announcements/svd-2022-0502.html) talks about Splunk Enterprise but makes no mention of the universal forwarder. Since the UF has many of the same API features as Enterprise, and I do see verboseLoginFailMsg = true when running the btool utility, my assumption is that the UF is also vulnerable. Can someone confirm:

1. Whether my assumption is correct.
2. Whether the same mitigation can be performed (so we can use the deployment server to resolve it).
3. Which version of the UF is not vulnerable.

Thanks, Gord T.