All Topics



Hi All, I had a request to onboard a CSV file from a path in the source to our Splunk Cloud. I have completed the configurations below: inputs.conf, props.conf and transforms.conf. I can now see the data in Splunk, but every event has the same timestamp, which is 11:00 am, 25th March 2022. I am not able to find out what the problem is. Below are my configurations:

props.conf

[sns:CSV]
category=Structured
INDEXED_EXTRACTIONS=CSV
KV_MODE=none
FIELD_DELIMITER=,
HEADER_FIELD_DELIMITER=,
FIELDS_NAMES=field1,field2... and so on
TIMESTAMP_FIELDS=stattime
TRANSFORMS-eliminate_header = eliminate_header

transforms.conf

[eliminate_header]
REGEX = ^(?: _id)
DEST_KEY = queue
FORMAT = nullQueue
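If the timestamp field is not recognized at parse time, Splunk falls back to the current time, which would explain every event sharing one timestamp. Two things worth checking against the props.conf spec: the setting is spelled FIELD_NAMES (not FIELDS_NAMES), and TIMESTAMP_FIELDS must exactly match a header column name (the config above says "stattime"). A minimal sketch; the column names and TIME_FORMAT are assumptions to adapt to the actual file:

[sns:CSV]
category = Structured
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_NAMES = field1,field2,starttime
TIMESTAMP_FIELDS = starttime
# TIME_FORMAT is an assumption - match it to the real column format
TIME_FORMAT = %Y-%m-%d %H:%M:%S

Note also that INDEXED_EXTRACTIONS is applied by the instance that first reads the file (typically the forwarder), so for Splunk Cloud these props need to be deployed there, not only in the cloud stack.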
When I click Indexes and Volumes > volume_detail_instance, the page has no data to display, and it shows the message 'Search is waiting for input'. Can anyone help me solve this problem? Thanks.
Hi all, we have events in a single index for flows into and out of a gateway, and I'm trying to link an incoming event with the outgoing one:

search 1: index=vpc | where src=<gateway_out_ip> | table starttime, endtime, src, dest
search 2: index=vpc | where dest=<gateway_in_ip> AND src=<server_ip> | table starttime, endtime, src, dest

The idea is to join search 1 to search 2 where the starttimes are within 3 seconds of each other, so I can see the dest in search 1 for the <server_ip> in search 2. I tried using transaction, but there is no common field between the two searches. I only want to include events from search 1 that have a corresponding (within 3 seconds) event in search 2. Can anyone advise on the best way to do this?

Thanks
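One possible approach, sketched under the assumption that starttime is in epoch seconds: run both legs in a single search, tag each event, sort by time, and use streamstats to compare each event with the one before it from the other leg:

index=vpc (src=<gateway_out_ip> OR (dest=<gateway_in_ip> AND src=<server_ip>))
| eval leg=if(src="<gateway_out_ip>", "out", "in")
| sort 0 starttime
| streamstats current=f window=1 last(starttime) as prev_time last(leg) as prev_leg last(dest) as prev_dest
| where leg="in" AND prev_leg="out" AND abs(starttime - prev_time) <= 3
| table starttime, src, dest, prev_dest

This only pairs an incoming event with the immediately preceding outgoing one; if the two legs can arrive in either order, a second pass with the opposite condition is needed.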
Hello Splunkers, we configured the Splunk Add-on for VMware ESXi Logs on one of our Heavy Forwarders as described in: https://docs.splunk.com/Documentation/AddOns/released/VMWesxilogs/Install However, we see a large number of incorrectly extracted sourcetypes, e.g.:

vmware:esxlog:--
vmware:esxlog:ERROR
vmware:esxlog:INFO
vmware:esxlog:NoneZ
vmware:esxlog:WARNING
vmware:esxlog:a
vmware:esxlog:a-cli-info-python
vmware:esxlog:a-dabc
vmware:esxlog:a-e
vmware:esxlog:a-vsan-task-tracker
vmware:esxlog:ab
vmware:esxlog:abc
vmware:esxlog:af
vmware:esxlog:althSystemImpl

We tried to add an additional regex in set_syslog_sourcetype in transforms.conf, but then the events stopped coming in at all. Our config files are as follows (all on the Heavy Forwarder):

inputs.conf

[monitor:///opt/splunk/var/log/remote/syslog-tlxfr*.log]
disabled = false
index = vmware-esxilog
sourcetype = vmw-syslog

props.conf

[vmw-syslog]
TRANSFORMS-vmsysloghost = set_host
TRANSFORMS-vmsyslogsourcetype = set_syslog_sourcetype
MAX_TIMESTAMP_LOOKAHEAD = 20

transforms.conf

[set_host]
REGEX = ^(?:\w{3}\s+\d+\s+[\d\:]{8}\s+([^ ]+)\s+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

[set_syslog_sourcetype]
REGEX = ^(?:(?:\w{3}\s+\d+\s+[\d\:]{8})|(?:<\d+>)?(?:(?:(?:[\d\-]{10}T[\d\:]{8}(?:\.\d+)?(?:Z|[\+\-][\d\:]{3,5})?))|(?:NoneZ)?)|(?:\w{3}\s+\w{3}\s+\d+\s+[\d\:]{8}\s+\d{4}))\s[^ ]+\s+([A-Za-z\-]+)(?:[^:]*)[:\[]
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::vmware:esxlog:$1

[esx_hostd_fields_6x]
REGEX = ^(?:(?:\w{3}\s+\d+\s+[\d\:]{8})|(?:<(\d+)>)?(?:(?:(?:[\d\-]{10}T[\d\:]{8}(?:\.\d+)?(Z|[\+\-][\d\:]{3,5})?))|(?:NoneZ)?)|(?:\w{3}\s+\w{3}\s+\d+\s+[\d\:]{8}\s+\d{4}))\s[^ ]+\s+([^\[\:]+):\s(?:(?:[\d\-:TZ.]+)\s*)?(\w+)\s*(?:\S+\[\S+\])?\s*\[(?:[^\s\]]+)\s*(?:sub=([^\s\]]+))?\s*(?:opID=([^\s\]]+))?(?:[^]]+?)?\]\s*(.*)$
FORMAT = Pri::$1 Offset::$2 Application::$3 Level::$4 Object::$5 opID::$6 Message::$7

Does anyone have any idea how to solve this? It seems like it should be simple, but we are stuck.

Greetings, Justyna
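One way to stop stray captures like "a" or "ERROR" from becoming sourcetypes is to restrict the capture group to a whitelist of daemon names instead of the open-ended [A-Za-z\-]+. A sketch; the daemon list is an assumption and should be extended to match your environment. Events whose process name is not in the list simply keep the vmw-syslog sourcetype rather than being dropped, which avoids the "events stopped coming in" failure mode:

[set_syslog_sourcetype]
# whitelist of expected ESXi process names - an assumption, extend as needed
REGEX = \s\S+\s+(hostd|vpxa|vmkernel|vmkwarning|vmkeventd|rhttpproxy|fdm|vobd)(?:[^:]*)[:\[]
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::vmware:esxlog:$1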
So I am in the process of migrating a distributed setup with 1 search head, 1 deployment/license server and 1 indexer. I am starting by testing just the search head. I installed a fresh copy of Splunk Enterprise on a new Linux machine. After that I zipped the splunk/etc folder from the Windows machine, copied it to the Linux machine, unzipped it and replaced the splunk/etc folder there. This new Linux Splunk server doesn't have a connection to the other servers yet. When I try to start it I get the following error: Any ideas?
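The error itself did not come through, but a common failure mode when copying etc/ wholesale from Windows to Linux is that OS-specific files come along, notably splunk-launch.conf with Windows-style paths, and that file ownership ends up wrong. A checklist sketch, assuming Splunk is installed in /opt/splunk and runs as the user splunk:

# keep the fresh Linux launch config instead of letting the Windows copy
# (which carries SPLUNK_HOME=C:\... style paths) overwrite it
cp /opt/splunk/etc/splunk-launch.conf /tmp/splunk-launch.conf.linux
# ... unzip the Windows etc/ over /opt/splunk/etc here ...
cp /tmp/splunk-launch.conf.linux /opt/splunk/etc/splunk-launch.conf
# fix ownership and validate the merged configuration before starting
chown -R splunk:splunk /opt/splunk
/opt/splunk/bin/splunk btool check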
Hi, I have encountered a problem with adding a custom field to an asset table. I have followed a series of articles published on the Splunk blog:
- Asset & Identity for Splunk Enterprise Security - Part 1: Contextualizing Systems
- Asset & Identity for Splunk Enterprise Security - Part 2: Adding Additional Attributes to Assets
- Asset & Identity for Splunk Enterprise Security - Part 3: Empowering Analysts with More Attributes in Notables

and read the Splunk documentation on this topic. But for some reason it doesn't work and I don't know how to debug it any more, so I am looking for tips on how to troubleshoot this issue. I am sure ES has access to the asset table, since values of the default columns are added to the notable index when a correlation search generates a notable event. The enrichment for assets works, but it somehow ignores custom columns in the asset table, even though the custom field is specified in the Asset Fields tab in ES.

Version of ES: 6.6.2
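One way to narrow down where the custom field gets lost is to check whether it survives into the merged asset lookup that ES queries at enrichment time. A sketch, assuming the default ES lookup name asset_lookup_by_str and a hypothetical custom field my_custom_field:

| inputlookup asset_lookup_by_str
| search asset="some-known-host"
| table asset, my_custom_field

If the field is empty here, the problem is on the merge/identity-management side; if it is populated, the problem is on the enrichment/notable side.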
Hi, I have to do a gap analysis on Splunk in order to check which logs are getting ingested and whether there are any gaps. Please help. Thanks, SR
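A common starting point for this kind of inventory is tstats, which shows what is actually arriving per index and sourcetype and when data last arrived; gaps show up as stale last_seen times. A sketch:

| tstats count latest(_time) as last_seen where index=* by index, sourcetype
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| sort index, sourcetype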
Hi, I am new to Splunk and TH. I want to understand how I can check which logs are being ingested in my client's Splunk architecture. Also, is there a way I can look at the client's network architecture from Splunk? Thanks in advance
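For a quick inventory of what is being ingested, the metadata command is a lightweight option. A sketch listing every sourcetype with its event count and most recent event time:

| metadata type=sourcetypes index=*
| eval lastTime=strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| table sourcetype, totalCount, lastTime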
With the setup below, we can configure a single value dashboard with dynamic coloring that changes with trendValue:

"trendColor": "> trendValue | rangeValue(trendColorEditorConfig)",

Can we somehow decide the color based on the trend percentage instead of the trendValue itself? For example, color 1 for a 10% increase, color 2 for a 20% increase, etc. Thanks.
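One workaround is to compute the percentage change in SPL so that the field rangeValue colors on is already a percentage. A sketch; the index, span and field names are assumptions:

index=my_index
| timechart span=1h count
| streamstats current=f window=1 last(count) as prev
| where isnotnull(prev)
| eval pct_change=round((count - prev) / prev * 100, 1)

The single value's trend can then be pointed at pct_change and the existing rangeValue coloring kept, with thresholds such as 10 and 20 in trendColorEditorConfig.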
Hello, is it possible to use a cron schedule that runs a search every hour, ten minutes past the hour, but only between 7 AM and 7 PM? I have tried this, but it only runs the search at 7:10 every day:

10 7 * * *

Thanks
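Standard cron ranges cover this: keep the minute fixed at 10 and give the hour field a 7-19 range, which runs the search at 7:10, 8:10, ..., 19:10 each day:

10 7-19 * * *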
Hi, I have extracted a new field "proc_name" from the source and added it to the table command of an existing query, and I am generating an email alert which is not showing the new field "proc_name" value in the email.

host=XXX index=YYY sourcetype=app_logs rc time_taken="*"
| search RC>=8
| table client_ip, proc_name, proc_id, RC, Message

client_ip      proc_name    proc_id    RC    Message
MsgIDLCPS0.                 5030       7     Process 'UPROC' #50930 - RC=7MsgIDLCPS0.
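If proc_name comes from a search-time extraction defined in the UI, its app/permission scope may not apply in the context where the scheduled alert runs. One way to rule that out, sketched here, is to extract the field inline in the alert search itself; the pattern assumes the process name is the quoted token after "Process" in the Message field:

host=XXX index=YYY sourcetype=app_logs rc time_taken="*"
| rex "Process '(?<proc_name>[^']+)'"
| search RC>=8
| table client_ip, proc_name, proc_id, RC, Message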
I am in the process of writing a Splunk script that is going to overwrite the contents of a lookup file using REST. However, the issue I am hitting is how to authenticate against the REST endpoint. I am planning on having Splunk run the script (probably through inputs.conf). It would run every x hours and update the lookup using a Python script that calls an outside source. I can successfully call the outside source and parse the data, but I am stuck on how to overwrite the lookup table via REST. All examples of REST calls show passing credentials, and I don't want to hardcode any admin creds in the script itself. I found this article from Splunk, but the REST section clearly shows them passing creds. Are there any other ways to do this? https://www.splunk.com/en_us/blog/tips-and-tricks/store-encrypted-secrets-in-a-splunk-app.html Any suggestions?
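One pattern that avoids hardcoding credentials: run the script as a scripted input with passAuth in inputs.conf, so splunkd hands the script a session key on stdin at launch. That key can then authorize REST calls, including reading secrets from storage/passwords. A sketch; the paths, app name and interval are assumptions:

# inputs.conf
[script://$SPLUNK_HOME/etc/apps/myapp/bin/update_lookup.py]
interval = 14400
passAuth = splunk-system-user

# update_lookup.py
import json
import ssl
import sys
import urllib.request

# splunkd writes a session key to stdin when passAuth is set
session_key = sys.stdin.readline().strip()

# lab-only shortcut; verify certificates properly in production
ctx = ssl._create_unverified_context()
req = urllib.request.Request(
    "https://localhost:8089/servicesNS/nobody/myapp/storage/passwords?output_mode=json",
    headers={"Authorization": "Splunk " + session_key},
)
with urllib.request.urlopen(req, context=ctx) as resp:
    secrets = json.load(resp)
# ... use the retrieved secret to call the outside source, then write the
# new lookup contents back with the same Authorization header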
In our outputs.conf for our Splunk forwarders we have two tcpout target groups ([tcpout:<target_group>]). Both tcpout groups have multiple servers and are auto-load-balanced. Our second tcpout group (a remote Splunk instance) became unavailable due to a network issue, which caused all of our forwarders' local queues to fill up and block forwarding entirely (for both groups), as they were no longer able to forward data to the second group. I'm looking into solutions using outputs.conf, namely the tcpout settings maxQueueSize and dropEventsOnQueueFull - a combination of these seems like it will solve our problem. However, on reading the documentation (https://docs.splunk.com/Documentation/Splunk/8.2.5/Admin/Outputsconf), under dropEventsOnQueueFull it says: * CAUTION: DO NOT SET THIS TO A POSITIVE INTEGER IF YOU ARE MONITORING FILES. I am monitoring files - so this seems like a deal breaker? Can somebody help me understand why we wouldn't want to configure this setting if we're monitoring files? Or should we simply set this to 0 (not a positive integer)? If there's an outage of the second tcpout group, we're fine with losing some data for that site if that's the price of keeping the forwarders reporting to our first tcpout group. Hope that makes sense! Thanks in advance!
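For reference, these settings can be scoped to just the unreliable target group rather than set globally, so the primary group keeps its default blocking behavior. A sketch with placeholder server lists; the 30-second wait is an arbitrary example value, and whether it is safe with file monitoring is exactly the question above:

[tcpout:primary]
server = idx1:9997, idx2:9997

[tcpout:remote_site]
server = remote1:9997, remote2:9997
maxQueueSize = 64MB
# wait up to 30 seconds for queue space, then drop new events for this group only
dropEventsOnQueueFull = 30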
Hey there, pretty new to Splunk searching. I am trying to create a table that combines search results based on SerialNumber and splits them into 3 columns on one row.

Currently:

`main_index` SerialNumber IN (XXX-XXX-XXX)
| search "DUKPT KEY #" OR "type=DUKPT"
| rex "DUKPT (Key|KEY) #(?<slot>[0-9]): \[ Status: (?<Status>[A-Z_]+)"
| rex "KSN:(?<Key>.+)\]"
| eval slot = if(LIKE(ApplicationVersion,"6.%"), slot, slot - 1)
| eval Key = if(LIKE(ApplicationVersion,"6.%"), ("Slot #".slot.": KSN: ".ksn), if(Status="KEY_PRESENT","Slot #".slot.": KSN: ".Key,"Slot #".slot.": No Key Loaded"))
| dedup slot SerialNumber
| table SerialNumber Key
| sort slot

Result:

Desired result:

SerialNumber    Slot0            Slot1            Slot2
XXX-XXX-XXX     No Key Loaded    No Key Loaded    No Key Loaded

I've tried transpose and transaction (which merged the entries into one row, but I couldn't figure out how to split the entries into their own column/field).
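xyseries (or equivalently chart) can pivot the slot number into columns, keeping one row per SerialNumber. A sketch that replaces the final table/sort, building the column name from the existing slot field:

... | dedup slot SerialNumber
| eval slot_col="Slot".slot
| xyseries SerialNumber slot_col Key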
Hi guys, I am trying to do a search and at the same time drop certain information from showing up. As seen in the table below, there is this user [ghjkl-hh123-wer56] that shows up. What must I do in the search string so that usernames like the above no longer show up? Please advise.

username             hostname
user1                host1
user2                host2
ghjkl-hh123-wer56    host3
ghjkl-hh123-wer56    host4
user3                host4

Hope this clarifies. Thank you. Regards, Alex
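If the unwanted accounts follow a recognizable pattern (three hyphen-separated alphanumeric tokens, as in the sample), the regex command can filter them out. The pattern below is an assumption generalized from the one example value:

... | regex username!="^\w+-\w+-\w+$"
| table username, hostname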
The purpose of this topic is to create a home for legacy diagrams on how indexing works in Splunk, created by the legendary Splunk Support Engineer, Masa! Keep in mind that the information and diagrams in this topic have not been updated since Splunk Enterprise 7.2. These used to live on an old Splunk community Wiki resource page that has been or will be taken down, but many users have said these have been and still are helpful. Happy learning!
I've been developing a dashboard that leverages a timeline viz, but I'm having a considerably hard time adding CSS/HTML to remove the text-overflow of the labels. I see it's using the text.timeline.label class, but I'm far from an expert in JS or HTML for that matter... any insights from people who have solved this?
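For a Simple XML dashboard, one common trick is to inject CSS from a hidden HTML panel. A sketch, where the selector is taken from the post and may need adjusting to whatever the viz actually renders (SVG text elements often ignore text-overflow, so overriding the clipping may be needed instead):

<row depends="$alwaysHide$">
  <panel>
    <html>
      <style>
        text.timeline.label {
          text-overflow: unset !important;
          overflow: visible !important;
        }
      </style>
    </html>
  </panel>
</row>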
I have been having issues trying to download the Splunk SDK into my .NET 5 project, and I am wondering if there are compatibility issues.
Hi All, does the recently announced security vulnerability CVE-2021-3422 also apply to HWFs and IFs that might be receiving and/or cooking data? Thanks
Hi, I want eventgen (installed as an app) to continuously replay 3 events, just replacing the timestamp:

# etc/apps/my_app/local/eventgen.conf
[host_perf_monitor.sample]
mode = replay
interval = 15
earliest = -15s
latest = now
outputMode = file
fileName = /opt/tmp/host_cpu.log
token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
token.0.replacementType = replaytimestamp
token.0.replacement = %Y-%m-%d %H:%M:%S

# etc/apps/my_app/samples/host_perf_monitor.sample
2022-03-24 01:01:10 host=linux_server_01 status=WARNING object=cpu_used_pct value=66
2022-03-24 01:01:12 host=linux_server_01 status=ERROR object=cpu_used_pct value=85
2022-03-24 01:01:14 host=linux_server_01 status=GOOD object=cpu_used_pct value=44

If eventgen ran at 08:38:45, I would expect the output to be:

2022-03-25 08:38:41 host=linux_server_01 status=WARNING object=cpu_used_pct value=66
2022-03-25 08:38:43 host=linux_server_01 status=ERROR object=cpu_used_pct value=85
2022-03-25 08:38:45 host=linux_server_01 status=GOOD object=cpu_used_pct value=44

but instead the first and second events share the same timestamp:

2022-03-25 08:38:43 host=linux_server_01 status=WARNING object=cpu_used_pct value=66
2022-03-25 08:38:43 host=linux_server_01 status=ERROR object=cpu_used_pct value=85
2022-03-25 08:38:45 host=linux_server_01 status=GOOD object=cpu_used_pct value=44

I tried other event "spreads" as well (for example at 01, 34, 45 seconds) and still always get the 1st and 2nd events with the same timestamp. Thanks