All Topics



I'm looking to set a variable (customerLabel) depending on whether the user selects "framework" or "team" from a dropdown list. The token set by the dropdown is $grouping-name$. Where am I going wrong? customerLabel is not being set to a value at all. I've included a snippet of the code below:

| eval teamCustomerLabel=case(issueLabel="customer1", "Customer 1", issueLabel="customer2", "Customer 2", issueLabel="customer3", "Customer 3", issueLabel="customer4", "Customer 4", issueLabel="customer5", "Customer 5", issueLabel="customer6", "Customer 6")
| eval frameworkCustomerLabel=case(issueLabel="customer1", "Group 1", issueLabel="customer2", "Group 1", issueLabel="customer3", "Group 2", issueLabel="customer4", "Group 2", issueLabel="customer5", "Group 3", issueLabel="customer6", "Group 3")
| eval customerLabel=case("$grouping-name$"=="framework", frameworkCustomerLabel, "$grouping-name$"=="team", teamCustomerLabel)
| chart count(key) as "Created" over _time by customerLabel where top 50
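A minimal debugging sketch, assuming the dropdown sets $grouping-name$ to the literal strings "framework" or "team": materialize the token into a field first so the substituted value becomes visible, and add a fallback branch so unmatched values show up instead of producing null:

```
| eval grouping="$grouping-name$"
| eval customerLabel=case(grouping=="framework", frameworkCustomerLabel,
                          grouping=="team",      teamCustomerLabel,
                          true(),                "unmatched: ".grouping)
| table grouping customerLabel
```

If the dropdown's *value* differs from its *label* (e.g. "Framework" with a capital F, or "team " with a trailing space), neither case() branch matches and customerLabel ends up null; the true() fallback makes that immediately visible.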
I find that the logs are not pointing to the right source/sourcetype. Logs are going to source="WinEventLog:Application" and sourcetype="WinEventLog" instead of source="WinEventLog:Security" and sourcetype="WinEventLog:Security". Can someone help me fix this to get the right source/sourcetype?

index=*_windows (sourcetype="WinEventLog:Security" OR source="WinEventLog:Security") (EventCode=1102 OR EventCode=517) Message="The audit log was cleared*"
| bucket span=1h _time
| stats count by _time, user, ComputerName, signature, index
| eval index=case(index="***_appliances","***_appliances",index="***_windows","***_windows",index="***_linux","***_linux",1=1,index)
| eval AF="0007"
| lookup ****_Thresholds.csv index AF OUTPUT Threshold ID Sev
| fillnull value="UNKNOWN" ID
| fillnull value=9999999 Threshold
| where count>Threshold
| fields - index AF
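If the Security channel is not collected by an explicit input stanza, events only arrive from whichever channels are enabled (here apparently Application). A minimal inputs.conf sketch for the forwarder on the Windows host (the index name is an assumption):

```
# inputs.conf - collect the Security event log explicitly
[WinEventLog://Security]
disabled = 0
index = xxx_windows
```

Note that this input sets source=WinEventLog:Security, but on recent versions of the Splunk Add-on for Windows the sourcetype is typically WinEventLog (or XmlWinEventLog when renderXml is enabled), so searches often need to match on source rather than sourcetype.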
Hi, I need to sort a field list like the one below, where each value is an uppercase letter followed by " - N". How can I do this?
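A sketch of one approach, assuming values like "A - 1", "A - 2", "B - 10" (the field name `label` is illustrative): a plain lexicographic sort puts "A - 10" before "A - 2", so split the value into its letter and numeric parts and sort on those instead:

```
| eval letter=mvindex(split(label, " - "), 0),
       num=tonumber(mvindex(split(label, " - "), 1))
| sort 0 letter num
| fields - letter num
```

The `sort 0` removes the default 10,000-result limit; drop the helper fields afterwards if they are not wanted in the output.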
Splunk is so nice, they made config management systems thrice! The indexer cluster manager, deployment server, and SHC deployer let you centralize configuration which can then be pushed to (or pulled by) the rest of your Splunk infrastructure. But how do you manage the config on these configuration management systems themselves? Is it simply someone SSH'ing into them and updating config files? Do you do something more sophisticated than that? Any and all answers welcome! The most charming answer will be selected as the best answer!
Hi All, I configured the Microsoft add-on with an Event Hub to get all security alerts from Defender for Cloud into Splunk. It seems Splunk can't collect some alerts and I don't understand why. The Event Hub is properly configured, because I see all the logs from the Event Hub, and I also see some security alerts, but not all of them. The only thing that gives me a suspicion: the Event Hub has 3 consumer groups and the input is configured with only one consumer group. Any help?
Hi, I have 3 indexes. I need to extract hash values from index 3 and search to see whether the same files exist in indexes 1 and 2 as well. Indexes 1 and 2 have the same field name, whereas index 3 has 3 different fields holding the values. I need to get all these values into a single field and then compare whether the same files exist in the other indexes. Details:

Index=1, sourcetype=1, hash_file
Index=2, sourcetype=2, hash_file
Index=3, sourcetype=3, hash_md5, hash_sha1, hash_sha256

Could someone please help me with an SPL query?
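One possible sketch, using the index and sourcetype names from the question: normalize every hash into a single multivalue field, expand it, and keep hashes seen in more than one index:

```
(index=1 sourcetype=1) OR (index=2 sourcetype=2) OR (index=3 sourcetype=3)
| eval hash=mvappend(hash_file, hash_md5, hash_sha1, hash_sha256)
| mvexpand hash
| stats values(index) as indexes, dc(index) as index_count by hash
| where index_count > 1
```

mvappend() skips null arguments, so each event contributes only the hash fields it actually has. One caveat: an md5 from index 3 can only ever match indexes 1 and 2 if hash_file stores md5 values too; hashes of different algorithms never collide across indexes.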
Hi, I have configured the InfoSec app in my Splunk deployment, making sure that I had completed all the steps in the prerequisites. It was working for a couple of days, but it suddenly stopped showing data. I have CIM for Splunk, and I can see in the InfoSec health panel that acceleration for the data models is working, but I'm only receiving events and details from the Authentication and Change data models. Going through this documentation: https://docs.splunk.com/Documentation/InfoSec/1.7.0/Admin/ValidateDataSources#Identify_tagged_events_to_configure_the_data_models I have checked that only Authentication and Change are getting data, not the rest. When I follow the guide, there are no tags for the rest of the data models. Is this why InfoSec stopped working? Can anyone help with this? Thank you. Regards
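A sketch for checking whether any raw events carry the tags a given data model expects — here Network_Traffic, whose constraint tags per the CIM documentation are `network` and `communicate` (swap in the tags for each model you are validating):

```
index=* tag=network tag=communicate earliest=-24h
| stats count by index, sourcetype
```

If this returns nothing, the add-ons that should apply those tags (or their eventtypes) are not matching your data, and the accelerated data model will stay empty no matter how healthy the acceleration itself looks.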
My data consists of individual messages, tagged with the userID of the user who sent them. I want to count the number of users who say "burger" and "fries", but not necessarily in the same message. In the example:

UserID  Message
1       "I'd like to order a burger"
2       "The weather is nice"
1       "I'd also like some fries"
2       "I'd only like a burger"

User 1 should be counted but user 2 shouldn't. I believe a way to do this would be inner joining by the userID on two separate searches:

index=idx_chatbot_gb_p component=chatbot-ms
| spath "userID"
| spath input=payload output=Message path=messages.message{}.plaintext
| search (Message=* burger *)
| join type=inner userID [ search (Message=* fries *) ]

I get zero results when I try this, even though I get results on the individual searches and many users order burgers and fries. Does anyone know a better way to do this, or can anyone spot what I've done wrong? Thanks
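A join-free sketch that counts users whose messages, across any number of events, mention both words (index, component, and field names taken from the question). This avoids the likely join problem — the subsearch above has no base search or spath of its own, so it has no userID or Message fields to match on:

```
index=idx_chatbot_gb_p component=chatbot-ms
| spath userID
| spath input=payload output=Message path=messages.message{}.plaintext
| stats max(eval(if(like(Message, "%burger%"), 1, 0))) as said_burger,
        max(eval(if(like(Message, "%fries%"),  1, 0))) as said_fries
        by userID
| where said_burger=1 AND said_fries=1
| stats count as users_ordering_both
```

stats also sidesteps join's subsearch result limits, which can silently drop matches on larger datasets.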
Hello, is it possible to get data from Kafka into Splunk SaaS (Splunk Cloud)?
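One common route is Splunk Connect for Kafka, a Kafka Connect sink connector that forwards topics to a HTTP Event Collector (HEC) endpoint, which Splunk Cloud exposes. A minimal connector config sketch (the URI, token, and topic are placeholders for your environment):

```
name=splunk-sink
connector.class=com.splunk.kafka.connect.SplunkSinkConnector
tasks.max=1
topics=my-topic
splunk.hec.uri=https://http-inputs-<stack>.splunkcloud.com:443
splunk.hec.token=<hec-token>
```

The connector runs in your Kafka Connect cluster, so nothing needs to be installed on the Splunk Cloud side beyond creating the HEC token.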
Hi, how can we create an alert on database agent availability, similar to the app or machine agent? I am unable to find a similar metric for the DB agent. Regards, Mohit
Hi All, I had a request to onboard a CSV file from a path in the source to our Splunk Cloud. I have completed the configurations below: inputs.conf, props.conf, transforms.conf. I can now see the data in Splunk, but all the events have the same timestamp, i.e. 11:00 am, 25th March 2022. I am not able to find out what the problem is. Below are my configurations:

props.conf

[sns:CSV]
category=Structured
INDEXED_EXTRACTIONS=CSV
KV_MODE=none
FIELD_DELIMITER=,
HEADER_FIELD_DELIMITER=,
FIELDS_NAMES=field1,field2... and so on
TIMESTAMP_FIELDS=stattime
TRANSFORMS-eliminate_header = eliminate_header

transforms.conf

[eliminate_header]
REGEX = ^(?: _id)
DEST_KEY = queue
FORMAT = nullQueue
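Two details worth double-checking against a working structured-input sketch (the field name `starttime` and the TIME_FORMAT here are assumptions — TIMESTAMP_FIELDS must match a real column name exactly, or Splunk falls back to index time for every event, which would produce exactly one shared timestamp):

```
[sns:CSV]
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = field1,field2
TIMESTAMP_FIELDS = starttime
TIME_FORMAT = %Y-%m-%d %H:%M:%S
```

Also note the correct setting name is FIELD_NAMES (not FIELDS_NAMES), and that INDEXED_EXTRACTIONS is applied on the host running the file monitor, so this props.conf must be deployed to the forwarder, not only to the indexers.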
When I click Indexes and Volumes > volume_detail_instance, the page has no data to display, and it shows the message 'Search is waiting for input'. Can anyone help me solve this problem? Thanks.
Hi all, we have events in a single index for flows into and out of a gateway, and I'm trying to link each incoming event with the corresponding outgoing one:

search 1:
index=vpc | where src=<gateway_out_ip> | table starttime, endtime, src, dest

search 2:
index=vpc | where dest=<gateway_in_ip> AND src=<server_ip> | table starttime, endtime, src, dest

The idea is to join search 1 to search 2 where the start times are within 3 seconds of each other, so I can see the dest in search 1 for the <server_ip> in search 2. I tried using transaction, but there isn't any common data between the two searches. I only want to include events from search 1 that have a corresponding (within 3 seconds) event in search 2. Can anyone advise on the best way to do this? Thanks
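One join-free sketch (placeholders kept from the question): tag each leg in a single search, sort by start time, and use streamstats to carry the most recent incoming event forward, keeping outgoing events whose start time falls within 3 seconds of it:

```
index=vpc (src=<gateway_out_ip> OR (dest=<gateway_in_ip> AND src=<server_ip>))
| eval leg=if(src="<gateway_out_ip>", "out", "in")
| sort 0 starttime
| streamstats last(eval(if(leg="in", starttime, null()))) as last_in_start,
              last(eval(if(leg="in", src, null()))) as in_src
| where leg="out" AND (starttime - last_in_start) >= 0
                  AND (starttime - last_in_start) <= 3
| table starttime, in_src, src, dest
```

This assumes starttime is numeric (epoch seconds); if it is a string, convert it with strptime() before sorting and comparing.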
Hello Splunkers, we configured the Splunk Add-on for VMware ESXi Logs on one of our Heavy Forwarders as described in: https://docs.splunk.com/Documentation/AddOns/released/VMWesxilogs/Install However, we can see a huge number of wrongly extracted sourcetypes, e.g.:

vmware:esxlog:--
vmware:esxlog:ERROR
vmware:esxlog:INFO
vmware:esxlog:NoneZ
vmware:esxlog:WARNING
vmware:esxlog:a
vmware:esxlog:a-cli-info-python
vmware:esxlog:a-dabc
vmware:esxlog:a-e
vmware:esxlog:a-vsan-task-tracker
vmware:esxlog:ab
vmware:esxlog:abc
vmware:esxlog:af
vmware:esxlog:althSystemImpl

We tried to add an additional regex to set_syslog_sourcetype in transforms.conf, but then the events stopped coming in at all. Our config files are as follows (all on the Heavy Forwarder):

inputs.conf

[monitor:///opt/splunk/var/log/remote/syslog-tlxfr*.log]
disabled = false
index = vmware-esxilog
sourcetype = vmw-syslog

props.conf

[vmw-syslog]
TRANSFORMS-vmsysloghost = set_host
TRANSFORMS-vmsyslogsourcetype = set_syslog_sourcetype
MAX_TIMESTAMP_LOOKAHEAD = 20

transforms.conf

[set_host]
REGEX = ^(?:\w{3}\s+\d+\s+[\d\:]{8}\s+([^ ]+)\s+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

[set_syslog_sourcetype]
REGEX = ^(?:(?:\w{3}\s+\d+\s+[\d\:]{8})|(?:<\d+>)?(?:(?:(?:[\d\-]{10}T[\d\:]{8}(?:\.\d+)?(?:Z|[\+\-][\d\:]{3,5})?))|(?:NoneZ)?)|(?:\w{3}\s+\w{3}\s+\d+\s+[\d\:]{8}\s+\d{4}))\s[^ ]+\s+([A-Za-z\-]+)(?:[^:]*)[:\[]
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::vmware:esxlog:$1

[esx_hostd_fields_6x]
REGEX = ^(?:(?:\w{3}\s+\d+\s+[\d\:]{8})|(?:<(\d+)>)?(?:(?:(?:[\d\-]{10}T[\d\:]{8}(?:\.\d+)?(Z|[\+\-][\d\:]{3,5})?))|(?:NoneZ)?)|(?:\w{3}\s+\w{3}\s+\d+\s+[\d\:]{8}\s+\d{4}))\s[^ ]+\s+([^\[\:]+):\s(?:(?:[\d\-:TZ.]+)\s*)?(\w+)\s*(?:\S+\[\S+\])?\s*\[(?:[^\s\]]+)\s*(?:sub=([^\s\]]+))?\s*(?:opID=([^\s\]]+))?(?:[^]]+?)?\]\s*(.*)$
FORMAT = Pri::$1 Offset::$2 Application::$3 Level::$4 Object::$5 opID::$6 Message::$7

Does anyone have any idea how to solve this? It seems like it should be simple, but we are stuck. Greetings, Justyna
So I am in the process of migrating a distributed setup with 1 search head, 1 deployment/license server and 1 indexer. I am starting by just testing the search head. I installed a fresh copy of Splunk Enterprise on a new Linux machine. After that, I zipped the splunk/etc folder from the Windows machine, copied it to the Linux machine, unzipped it, and replaced the splunk/etc folder there. This new Linux Splunk server doesn't have a connection to the other servers yet. When I try to start it, I get the following error: Any ideas?
Hi, I have encountered a problem adding a custom field to an asset table. I have followed a series of articles published on the Splunk blog:
- Asset & Identity for Splunk Enterprise Security - Part 1: Contextualizing Systems
- Asset & Identity for Splunk Enterprise Security - Part 2: Adding Additional Attributes to Assets
- Asset & Identity for Splunk Enterprise Security - Part 3: Empowering Analysts with More Attributes in Notables

and read the Splunk documentation on this topic. But for some reason it doesn't work, and I don't know how to debug it any further, so I am looking for tips on how to troubleshoot this issue. I am sure ES has access to the asset table, since values of the default columns are added to the notable index when a correlation search generates a notable event. The enrichment for assets works, but it somehow ignores the custom columns in the asset table, even though the custom field is specified in the Asset Fields tab in ES. Version of ES: 6.6.2
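A sketch for checking whether the merged asset lookup itself still contains the custom column (asset_lookup_by_str is the default ES merged lookup name; if the field is already gone here, the merge process dropped it before enrichment ever runs, which points at the source lookup or field config rather than the notable enrichment):

```
| inputlookup asset_lookup_by_str
| fieldsummary
| table field count
```

If the custom field does appear here with sensible counts, the problem is more likely on the enrichment side (the Asset Fields configuration or the notable event rendering).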
Hi, I have to do a gap analysis in Splunk to check which logs are being ingested and whether there are any gaps in them. Please help. Thanks, SR
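A starting-point sketch: list every index/sourcetype pair, when it last sent data, and how long ago that was, so stale or silent feeds rise to the top (the 30-day window is an assumption — widen or narrow it to taste):

```
| tstats latest(_time) as last_seen, count where index=* earliest=-30d by index, sourcetype
| eval hours_since_last_event=round((now() - last_seen) / 3600, 1)
| sort - hours_since_last_event
```

tstats reads only indexed metadata, so this stays fast even across all indexes.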
Hi, I am new to Splunk and TH. I want to understand how I can check which logs are being ingested in my client's Splunk architecture. Also, is there a way I can look at the client's network architecture from Splunk? Thanks in advance
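For a quick inventory of what is being ingested, one sketch uses the metadata command, which summarizes per-sourcetype activity without scanning raw events (note it only covers data still in retained buckets):

```
| metadata type=sourcetypes index=*
| eval last_seen=strftime(lastTime, "%F %T")
| table sourcetype, totalCount, last_seen
```

Splunk has no built-in view of the client's network topology as such, but the hosts, sources, and sourcetypes it reports (e.g. `| metadata type=hosts index=*`) give a rough picture of which systems are actually sending data.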
With the setup below, we can build a single value dashboard with dynamic coloring that changes with trendValue:

"trendColor": "> trendValue | rangeValue(trendColorEditorConfig)",

Can we somehow decide the color based on the trend percentage instead of trendValue itself? For example, color 1 for a 10% increase, color 2 for a 20% increase, etc. Thanks.
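I'm not aware of a built-in percentage mode for trendColor; one workaround sketch is to compute the percentage change in the search itself, so the existing rangeValue thresholds operate on a percentage (the span and field names here are illustrative):

```
| timechart span=1h count
| delta count as change
| eval trendPct=round(change / (count - change) * 100, 1)
```

The single value visualization can then use trendPct as its trend field, with the rangeValue ranges set to the 10/20/... percent breakpoints.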
Hello, is it possible to use a cron schedule that runs a search every hour, ten minutes after the hour, and only between 7 AM and 7 PM? I have tried the following, but it only runs the search at 7:10 each day:

10 7 * * *

Thanks
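A range in the hour field does this — the schedule below fires at 10 minutes past every hour from 07:10 through 19:10:

```
# minute hour day-of-month month day-of-week
10 7-19 * * *
```

If the last run should be 18:10 rather than 19:10, use `10 7-18 * * *` instead.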