All Topics

I'm attempting to find file downloads within a 2-minute timespan following a browser being spawned from Outlook (my subsearch). Everything works fine (both the search and the subsearch) until I add the regex command limiting the filepath to the Downloads folder. I'm getting the error "Error in 'SearchOperator:regex': Usage: regex <field> (=|!=) <regex>." Can anyone help me understand why the regex command is throwing it off? I think it's taking the subsearch as part of the regex syntax, but I don't know how to separate the two.

Search:

    index=random_index event_simpleName=*FileWritten
    | regex TargetFileName="^[\WD]\w*\S*\W(?:Users)\W\w+\.\w+\W(?:Downloads)\W\w+"
        [search index=random_index* sourcetype=stuff event_simpleName=ProcessRollup*
            ParentBaseFileName=OUTLOOK.EXE ImageFileName IN (*firefox* *chrome* *edge*)
            CommandLine IN (*sharepoint.com*) NOT CommandLine IN (*vendor*)
        | rename _time AS earliest
        | eval latest=relative_time(_time,"+5min@min")
        | table aid earliest latest
        | format]
    | table _time aid TargetFileName
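One likely fix, sketched below: the regex command takes only <field>=<regex> as its argument, so a subsearch placed immediately after it gets parsed as part of the regex expression. Moving the subsearch up into the base search (where its output becomes filter terms) and running regex afterwards keeps the two separate. This is a minimal sketch assuming the same indexes and field names as above; note also that after rename _time AS earliest, the relative_time() call would need to reference earliest rather than _time.

    index=random_index event_simpleName=*FileWritten
        [search index=random_index* sourcetype=stuff event_simpleName=ProcessRollup*
            ParentBaseFileName=OUTLOOK.EXE ImageFileName IN (*firefox* *chrome* *edge*)
            CommandLine IN (*sharepoint.com*) NOT CommandLine IN (*vendor*)
        | rename _time AS earliest
        | eval latest=relative_time(earliest,"+5min@min")
        | table aid earliest latest
        | format]
    | regex TargetFileName="^[\WD]\w*\S*\W(?:Users)\W\w+\.\w+\W(?:Downloads)\W\w+"
    | table _time aid TargetFileName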
I have a lookup table like:

    Date   ID     Name
    02/04  12547  xxx
    02/04  12458  xxx
    02/04  14587  xxx

I am running: basesearch | table Date ID Name

    Date   ID     Name
    02/08  12547  xxx
    02/08  12458  xxx
    02/08  45896  xxx
    02/08  47523  xxx

I want to compare my search table to the lookup table and output the non-matching results from my search table. I tried selfjoin but it did not work. Is there any other way I can do this?
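One common approach, sketched below under the assumption that the lookup file is named my_lookup.csv and that ID is the key to match on: use the lookup command to pull a field from the lookup, then keep only the rows where the lookup found nothing.

    basesearch
    | table Date ID Name
    | lookup my_lookup.csv ID OUTPUT Name AS lookup_name
    | where isnull(lookup_name)
    | fields - lookup_name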
For Splunk Cloud, how do we extract multiple values for one field from a single event?
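In case a concrete example helps frame an answer: rex with max_match=0 returns every match as a multivalue field, and it works the same on Splunk Cloud. A minimal sketch, assuming a hypothetical event containing several id=<number> pairs:

    | rex field=_raw max_match=0 "id=(?<id>\d+)"
    | table id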
Hi Team, in my environment we made the Deployment Server act as the License Master as well.

Current architecture:
1 - Search Head
1 - Indexer
1 - Deployment Server
2 - Universal Forwarders

I am planning to implement a Master as well for the current architecture. Will my architecture require a Heavy Forwarder? In addition to the above, if I do implement a HF, is there any impact? I'd also like to understand how the deployment pipeline works.
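On the licensing piece, for reference: pointing each instance at the license master is a one-stanza server.conf change, so no extra forwarder is needed for that role. A minimal sketch, assuming the Deployment Server's hostname is deploy01 and the default management port (the key is manager_uri on Splunk 9.x; older releases use master_uri):

    # server.conf on the search head and indexer
    [license]
    manager_uri = https://deploy01:8089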
Can accelerated data models handle pre-summarized data accurately? Take authentication messages, for example. Most apps and operating systems send one message per authentication attempt, but I've seen some solutions that send a digest every five minutes or so showing the count of success/failed messages by user. Are there settings that control how accelerated data models calculate the "count" when the messages contain a "count=X" field? And if so, can an accelerated data model handle a blend of raw and summarized events?
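For context on why this matters: tstats count over an accelerated model counts events, so a digest event carrying count=5 contributes 1, not 5, and as far as I know there's no acceleration setting that changes that. A common pattern instead (a sketch; the field and data model names are assumptions) is a calculated field that defaults to 1 for raw events, added to the model so it can be summed rather than counted:

    # props.conf on the contributing sourcetypes (hypothetical field name)
    EVAL-event_count = coalesce(count, 1)

    | tstats sum(Authentication.event_count) AS attempts from datamodel=Authentication by Authentication.user

This handles a blend of raw and pre-summarized events cleanly, since both end up with a meaningful event_count.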
I am in the process of trying to create a search to identify when clients have MFA enabled/disabled. The purpose of this search is to include date, time, device info, geolocation... basically everything needed so that if an account was compromised, or MFA was disabled longer than what's allowed, I can document it and take appropriate action with the client.

    Index=* source=security_logs AND incident_id=0365 AND action=disable AND authentication_type=MFA time>=24h
    | table user, ip_address, inbound/outbound, time, date, duration
    | iplocation ip_address
    | stats latest(time_enabled) as time_enabled
    | eval days_disabled=round((now()-time_enabled)/(60*60*24))
    | table user, ip_address, inbound/outbound, time, date, duration, ip_city, ip_country, days_disabled
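A possible restructuring, sketched with assumed field names (user, ip_address, and the 0365/MFA filters are taken from the draft above; adjust to your actual events): use earliest=-24h for the time window rather than a time>= search term, run iplocation before the stats, and group the stats BY user so the per-user fields survive it.

    index=* source=security_logs incident_id=0365 action=disable authentication_type=MFA earliest=-24h
    | iplocation ip_address
    | stats latest(_time) AS last_disabled, latest(City) AS ip_city, latest(Country) AS ip_country BY user, ip_address
    | eval days_disabled=round((now() - last_disabled) / 86400, 1)
    | convert ctime(last_disabled)

The key fix is the BY clause: a bare stats latest(...) collapses the whole result set to one row and drops every other field.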
So far I haven't been able to find an answer that quite fits my problem. I have events for an app coming in as delimited text, and have extracted the fields based on the delimiters. For example, my events look like this (delimited by '@'):

    @ Field1 @ Field2 @ Field3 @ Field4

I am getting these events over UDP from an old system with a proprietary OS. Out of my control, the software sending the information splits one of the delimited values if the value has more than 256 characters. This value is a log message from the system. In cases where this happens, I always get two events: one event with the split value containing the first 256 characters, and another event with the rest of the message. So, very simply, the events coming into Splunk look a lot like this:

    @ Timestamp @ ProcessInformation.Process @ <Null> @ Message: Help. Reason: This is going to be cut out at the 256th c
    @ Timestamp @ ProcessInformation.Process @ 1 @ haracter.

I've extracted fields based on the delimiters, but I also need to extract fields from the split message. This gets tricky when the message is larger than 256 characters, because a field I need to extract is sometimes split across two events. When the value is split, both events contain exactly the same timestamp, down to six decimal places of a second. Also, since I am extracting fields based on the delimiter, the split message is always extracted into the same field, whether it's the first or second part of the message. Is there any way I can extract fields from this message across these two events, such that I get two fields:

    Message: Help
    Reason: This is going to be cut out at the 256th character.

Even better, is there any way I can combine these events so that I can get something like this?

    @ Timestamp @ ProcessInformation.Process @ 1 @ Message: Help. Reason: This is going to be cut out at the 256th character.

Any advice is appreciated, thank you.
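One search-time approach, sketched under the assumption that the delimiter extractions produced fields named process (field 2) and message (field 4): since both halves share an identical timestamp, group on it, glue the parts back together in event order, and then rex the merged string.

    index=old_system sourcetype=proprietary_udp
    | sort 0 _time
    | stats list(message) AS parts BY _time, process
    | eval full_message=mvjoin(parts, "")
    | rex field=full_message "Message: (?<Message>.+?)\. Reason: (?<Reason>.+)"

If the part order within an identical timestamp ever proves unstable, the third field (the <Null> vs 1 marker) can be used to sort the parts explicitly before the stats.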
Hi team, I have 14 records in the table, and I want to find the average of the first 7 records and the average of the last 7 records. How can I do that?
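A sketch, assuming the numeric column is named value (swap in your real field): streamstats numbers the rows, an eval buckets them into halves, and stats averages each bucket.

    | streamstats count AS row
    | eval half=if(row <= 7, "first7", "last7")
    | stats avg(value) AS avg_value BY half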
We have recently upgraded our Splunk from 8.0.2 to 9.0.4. The SH cluster members are giving the message "KV Store is running an old version, service(36). See the Troubleshooting Manual on KV Store upgrades for more information." We have followed the steps defined in the KV migration link, https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/MigrateKVstore?ref=hk, but the version doesn't seem to upgrade; it always shows the existing version.

Current:
serverVersion : 3.6.17
storageEngine : wiredTiger
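One thing worth double-checking: in that document, migrating the storage engine to wiredTiger (which you've already done) and upgrading the KV store server version are separate procedures. For a search head cluster the version bump is started from a single member with roughly the following (quoted from memory of the 9.0 docs, so verify the exact syntax there before running anything):

    splunk start-shcluster-upgrade kvstore -version 4.2
    splunk show kvstore-status --verbose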
I have a column that holds the OS name along with its version details.

    os_full_name
    CentOS Linux release 7.1.1503 (Core)
    CentOS Linux release 7.2.1511 (Core)
    CentOS Linux release 7.4.1708 (Core)
    Fire Linux OS 6.2.0-42
    Microsoft Windows Server 2008 R2 Enterprise Version 6.1.7601 Build 7601
    Microsoft Windows Server 2012 R2 Datacenter Version 6.3.9600 Build 9600
    Microsoft Windows Server 2012 R2 Standard Version 6.3.9600 Build 9600
    Microsoft Windows Server 2016
    Red Hat Enterprise Linux Server release 6.3 (Santiago)
    Red Hat Enterprise Linux Server release 6.9 (Santiago)
    SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 4
    SUSE Linux Enterprise Server 12 (x86_64) VERSION = 12 PATCHLEVEL = 3
    Ubuntu 14.04.3 LTS
    Ubuntu 16.04 LTS
    Ubuntu 16.04.3 LTS
    VMware ESXi vmnix-x86 5.5.0

I want to split this column so that I get the OS name and the version details in separate fields, to get something like:

    os_full_name                                                             | os_name                                      | os_version
    CentOS Linux release 7.1.1503 (Core)                                     | CentOS Linux                                 | release 7.1.1503 (Core)
    CentOS Linux release 7.2.1511 (Core)                                     | CentOS Linux                                 | release 7.2.1511 (Core)
    CentOS Linux release 7.4.1708 (Core)                                     | CentOS Linux                                 | release 7.4.1708 (Core)
    Fire Linux OS 6.2.0-42                                                   | Fire Linux OS                                | 6.2.0-42
    Microsoft Windows Server 2008 R2 Enterprise Version 6.1.7601 Build 7601  | Microsoft Windows Server 2008 R2 Enterprise  | Version 6.1.7601 Build 7601
    Microsoft Windows Server 2012 R2 Datacenter Version 6.3.9600 Build 9600  | Microsoft Windows Server 2012 R2 Datacenter  | Version 6.3.9600 Build 9600
    Microsoft Windows Server 2012 R2 Standard Version 6.3.9600 Build 9600    | Microsoft Windows Server 2012 R2 Standard    | Version 6.3.9600 Build 9600
    Microsoft Windows Server 2016                                            | Microsoft Windows Server 2016                |
    Red Hat Enterprise Linux Server release 6.3 (Santiago)                   | Red Hat Enterprise Linux Server              | release 6.3 (Santiago)
    Red Hat Enterprise Linux Server release 6.9 (Santiago)                   | Red Hat Enterprise Linux Server              | release 6.9 (Santiago)
    SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 4     | SUSE Linux Enterprise Server                 | 11 (x86_64) VERSION = 11 PATCHLEVEL = 4
    SUSE Linux Enterprise Server 12 (x86_64) VERSION = 12 PATCHLEVEL = 3     | SUSE Linux Enterprise Server                 | 12 (x86_64) VERSION = 12 PATCHLEVEL = 3
    Ubuntu 14.04.3 LTS                                                       | Ubuntu                                       | 14.04.3 LTS
    Ubuntu 16.04 LTS                                                         | Ubuntu                                       | 16.04 LTS
    Ubuntu 16.04.3 LTS                                                       | Ubuntu                                       | 16.04.3 LTS
    VMware ESXi vmnix-x86 5.5.0                                              | VMware ESXi                                  | vmnix-x86 5.5.0

Can someone please help with this?
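A starting point, sketched with the caveat that one regex rarely covers every vendor's string: capture everything up to the first token that begins the version details.

    | rex field=os_full_name "^(?<os_name>.+?)\s+(?<os_version>(release|Version|VERSION|vmnix|\d).*)$"

With this pattern the Windows rows split at the year (e.g. "Microsoft Windows Server 2016" becomes os_name="Microsoft Windows Server", os_version="2016"), so if you want the year kept in the name, handle the Windows strings first with a separate rex or a case() branch.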
I would like to add up all instances of a field pattern into a single variable named SynchronousExecution. Is there a better way to include all fields of the same type?

    | eval QPACAWTE_0001 = if(isNull(QPACAWTE_0001), 0, QPACAWTE_0001)
    | eval QPACAWTE_0002 = if(isNull(QPACAWTE_0002), 0, QPACAWTE_0002)
    | eval QPACAWTE_0003 = if(isNull(QPACAWTE_0003), 0, QPACAWTE_0003)
    | eval QPACAWTE_0004 = if(isNull(QPACAWTE_0004), 0, QPACAWTE_0004)
    | eval QPACAWTE_0005 = if(isNull(QPACAWTE_0005), 0, QPACAWTE_0005)
    | eval SynchronousExecution = QPACAWTE_0001 + QPACAWTE_0002 + QPACAWTE_0003 + QPACAWTE_0004 + QPACAWTE_0005

Thanks
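A tidier option, sketched below: foreach with a wildcard visits every field matching the pattern, and coalesce() substitutes 0 for missing values, so no per-field evals are needed and new QPACAWTE_* fields are picked up automatically.

    | eval SynchronousExecution = 0
    | foreach QPACAWTE_* [ eval SynchronousExecution = SynchronousExecution + coalesce('<<FIELD>>', 0) ]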
Good afternoon. My organization uses Cisco AnyConnect. We're concerned that some users may be connecting to our network from out of the country, and we'd like to get alerts when this happens. Has anyone ever set up such alerts before? I would like some guidance. I know the public IP of our network, but I'm not sure how to drill down in our logs to identify the inbound traffic to it. If I try to list the IP address as the source IP in one way or another, the results come up blank. Does anyone have any suggestions? Thanks.
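For the geolocation piece, a minimal sketch (the index, sourcetype, and src_ip/user field names are assumptions; AnyConnect session events usually arrive via the ASA/FTD syslog feed, so substitute whatever your VPN logs actually carry): iplocation maps the client IP to a country, and a where clause keeps only foreign connections.

    index=network sourcetype=cisco:asa AnyConnect
    | iplocation src_ip
    | where isnotnull(Country) AND Country!="United States"
    | stats count, values(Country) AS countries BY user, src_ip

Saved as an alert that triggers when results are returned, this covers the out-of-country case. Note that you want to filter on the client's public IP (the source of the VPN session), not your network's own public IP, which is likely why those searches came up blank.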
I want to create a default search filter for ALL users that go into ES Incident Review. You can create a new filter, but I believe this gets saved in your profile. I went into /splunk/etc/users, thinking the app that supports Incident Review would have it, but no. Ideas?
Hi, we already have a case open, but I'm wondering if someone else has run into this problem. Randomly, scheduled searches are losing their original owner and defaulting to Other. This causes the cron schedules to run at the wrong times, since the timezone then defaults to the system timezone. Does anyone have any experience figuring out why the owner is being changed to Other? We use SAML.

Thanks,
Chris
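Not a root cause, but a quick way to audit which scheduled searches have lost their owner (a sketch against the saved-searches REST endpoint):

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search is_scheduled=1
    | table title, eai:acl.app, eai:acl.owner, cron_schedule

Running this on a schedule and comparing snapshots can at least pin down when the ownership flips happen.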
Hello! So I'm trying to write a rex expression to pull out a specific bit of data from this:

    <plugin_output>Operating system version = 10.19044
    Architecture = x64
    Build lab extended = 19041.1.amd64fre.vb_release.191206-1406
    </plugin_output>

Specifically I want to extract the operating system version as a new field, "Win10Build", but I only want everything after the period, so in this specific example I'd like to have the new field Win10Build=19044. I've got a rex expression that ALMOST works for this:

    | rex field=pluginText (?<Win10Build>\.\d+)

But I haven't figured out how to make it capture only the five digits after the period and nothing else; right now it captures the period and everything after it. Any suggestions for how I can refine my search? If nothing else I could include an eval command that strips out the periods and leaves just the five-digit values, but that seems crude and complicated, and I'd like elegant and simple. If I could just write a rex expression that excludes the period, it would save the search time and effort, right? Thank you for any replies and for helping me learn!
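A sketch of one way to do it: anchor the pattern on the label and keep the literal period outside the capture group, so only the build digits land in the field.

    | rex field=pluginText "Operating system version = \d+\.(?<Win10Build>\d+)"

No eval cleanup needed; the capture group is the only thing rex stores in Win10Build.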
Hi all, I trained a model using the Smart Forecasting assistant (the StateSpaceForecast algorithm) in the Machine Learning Toolkit (MLTK) in Splunk. My goal is to schedule training for this model. However, when I arrive at Operationalize Forecast, the only option I have is "publish forecasting models". In the Splunk instructional videos there is an option for scheduled training, but in my case there is no such option. Can someone point me in the direction of what I should do to schedule training for this model?
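In case it unblocks you while the assistant option is missing: scheduled retraining is ultimately just a scheduled report that re-runs fit ... into. A sketch with placeholder search, field, and model names (check the MLTK docs for the exact StateSpaceForecast parameters on your version):

    your_base_search
    | fit StateSpaceForecast your_metric holdback=0 forecast_k=100 into app:my_forecast_model

Save that as a report and give it a cron schedule; each run overwrites app:my_forecast_model with a freshly trained model.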
Hi, I've been trying to connect my Cloud NSS feed from Zscaler to Splunk. If it works and I get a "Test 200: OK", what should I expect to see in my Splunk logs? How can I verify from Splunk that the connectivity test worked? Thanks!

Best regards,
Sara
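One way to check from the Splunk side, sketched with assumptions (the index and sourcetype below follow a typical Zscaler-over-HEC setup; use whatever index your HEC token actually routes to):

    index=zscaler sourcetype=zscalernss-web earliest=-15m
    | stats count BY sourcetype, source

If the test indexed anything, it should appear in the token's target index around the test time; an empty result alongside a 200 usually means the token works but the feed itself hasn't started sending yet.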
Hi all, I'm trying to do something that seems pretty easy conceptually. I'm ingesting a .txt report into Splunk, and I want to set the host metadata to the system that the report was generated from, not the host that Splunk is getting the log from. The problem is, every path I take creates a different issue that I can't (or really don't want to) deal with. I've looked through all the docs, and I'm either missing something, misconfiguring something, or it's not possible.

From what I understand, there are only a few ways to perform a host override:

1) Specify a regex (host_regex) in the inputs.conf stanza to extract the host from the source path, which could be either a folder or the filename, but is the "source" path nonetheless.
2) Specify a regex in props.conf and transforms.conf, which overwrites the host metadata based on the hostname inside the log.
3) Force a specific hostname string through the configuration files, but then this would be a static hostname for the source or sourcetype.

I've gotten all of these solutions to work individually, but each one creates a separate issue which prevents me from using it:

1) Works well, but I end up with hundreds of "source" file paths inside Splunk, which eventually clutters the datasets and confuses end users. I can get around this by declaring "source = <source>" in inputs.conf, but that changes the metadata that Splunk uses for the regex host extraction. So instead of getting the hostname from "source::c:\\logs\\client1.txt", it tries to regex the host from "source::<source_name>", which of course it will never find. So for #1, I either have to live with a ton of file paths inside Splunk OR have working regex host extraction, not both.

2) This also works, but brings another issue. The reports I'm ingesting are pretty large, so I have set up a custom LINE_BREAKER value. I can successfully extract the hostname from inside the report using props and transforms, but for reasons I can't figure out, the hostname doesn't carry over to the rest of the events as the report is line-broken. The first part of the txt file gets the correct host metadata (the hostname is in the first line of the file), but for every line-broken event after that, the regex fails and the host defaults to the system the log resides on. This really baffles me, because for the time settings, if Splunk can't extract the time in subsequent line breaks, it copies the value from the previous event, so the correct time metadata gets applied to all events. But it doesn't do that for the host. Why would it not apply the host metadata to all event lines as they are line-broken, given that Splunk should know, as it's ingesting, that they all come from the same file?

3) Works, but is not really an option because the reports come from different hosts, and a static hostname would just create erroneous data.

transforms.conf:

    [SET_HOST_NAME]
    DEST_KEY = MetaData:Host
    REGEX = \,HostName\:(.\S[^,]+)
    FORMAT = host::$1
    DEFAULT_VALUE = bonkers

props.conf:

    [SCC_Report]
    TRANSFORMS-H1 = SET_HOST_NAME
    TIME_PREFIX = SessionID:
    TIME_FORMAT = %Y-%m-%d_%H%M%S
    LINE_BREAKER = (\s\s)Title.*\:\sV\-
    SEDCMD-remove_fluff = s/([\s]+?Weight[\s\Sa-zA-Z0-9~@#$^*()_+=[\]{}|\\,.?:]*?---------------)/\n\n<REDACTED DURING INGESTION>\n/g
    SHOULD_LINEMERGE = false
    category = Custom
    disabled = false

fields.conf:

    [H1]
    INDEXED=true

Any help is appreciated.
I can't tell if I'm trying to get Splunk to do something it can't do, or if I'm just going about it the wrong way. The preferred end state is:
1) Ingest the *.txt report.
2) Set both "source" and "sourcetype" to something static (to prevent a collection of filenames inside sources and sourcetypes).
3) Set the host metadata, for all events created from a single txt report, to the host named in the first line of the report.
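One combination that might reconcile approach #1 with a static source, sketched as an untested idea rather than a verified config: let host_regex read the real file path at input time (before anything rewrites it), and flatten source at parse time with a second transform, so the hundreds of paths never reach the index. Paths, the monitor stanza, and the static source name are placeholders. If the host must come from inside the file instead (approach #2), this only solves the source-clutter half.

    # inputs.conf on the forwarder
    [monitor://C:\logs\*.txt]
    sourcetype = SCC_Report
    host_regex = \\logs\\(\w+)\.txt

    # transforms.conf on the parsing tier
    [set_static_source]
    REGEX = .
    DEST_KEY = MetaData:Source
    FORMAT = source::scc_report

    # props.conf, alongside the existing SCC_Report settings
    [SCC_Report]
    TRANSFORMS-src = set_static_source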
Hi, I'm new to Splunk and trying to understand the roles. We have one installation, and we need to customize it to work as a Forwarder, Indexer, or Search Head. I want to know which files need to be modified for it to work as a forwarder or an indexer. Correct me if my understanding is wrong. Please advise, thanks.
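At a high level the roles are indeed set by a few .conf files. A minimal sketch (host names and ports are placeholders): a forwarder gets an outputs.conf pointing at the indexers, and an indexer gets a splunktcp input to receive that traffic.

    # outputs.conf on the forwarder
    [tcpout:primary_indexers]
    server = indexer01:9997

    # inputs.conf on the indexer
    [splunktcp://9997]
    disabled = 0

A search head needs no special stanza to search; it just has the indexers configured as search peers (under Settings > Distributed search, or distsearch.conf).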
Hey, hope you can help. I was getting logs from Checkpoint through a UDP port. We changed the IP of the Splunk machine and adjusted the endpoint stream in Checkpoint to the new IP, but then we discovered that Splunk doesn't accept UDP traffic from outside its own host.
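For reference, a UDP network input listens on all interfaces by default once it's defined, so a sketch of the inputs.conf stanza on the new machine (the port and sourcetype are assumptions; use your actual values):

    [udp://514]
    sourcetype = cp_log
    connection_host = ip

If Checkpoint still can't reach it after the IP change, the usual culprits are the input not being enabled on the new machine or the OS firewall blocking the UDP port, rather than Splunk itself refusing external senders.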