All Topics

So I've searched and searched but can't seem to find an answer to my issue. I need to add an "All" option to my dynamic dropdown. I have found answers that seem simple enough: either add All, * to the static choices or alter the XML code. I've tried both (I think altering the XML left the dropdown in pretty much the same state as just adding the options to the static section), and each time I get a "search string cannot be empty" error. I don't know if it matters, but I did watch a couple of YouTube videos where the populating search ended with | table fieldname | dedup fieldname; when I did that I got the same error, but all the field values were grouped together, so I'm ending my search with | stats count by fieldname instead.
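A minimal Simple XML sketch of the usual fix (the token name field_tok, the field name fieldname, and the populating search are placeholders, not taken from the original dashboard): give the dropdown an explicit All choice whose value is *, and set a default and initial value so the token is never empty when the panel search runs:

<input type="dropdown" token="field_tok" searchWhenChanged="true">
  <label>Field</label>
  <choice value="*">All</choice>
  <default>*</default>
  <initialValue>*</initialValue>
  <fieldForLabel>fieldname</fieldForLabel>
  <fieldForValue>fieldname</fieldForValue>
  <search>
    <query>index=my_index | stats count by fieldname | fields fieldname</query>
  </search>
</input>

The "search string cannot be empty" error usually means the token had no value at all when the dashboard loaded, which the default/initialValue pair prevents; the panel query then simply becomes fieldname=* when All is selected.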
Hi! We've set up an Event Hub input using the Splunk Add-on for MS Cloud Services, and we are getting events into Splunk successfully. The problem is that the events are not formatted correctly when indexed: JSON-formatted events are indexed with all the quotation marks escaped, which breaks the syntax highlighting and the automatic field extractions. The sourcetype used during the Event Hub config is mscs:azure:eventhub, as the docs state. The following code is used to generate test data, and the rendered result is in the screenshot. Has anyone seen the same, or resolved it somehow?

#Method 1 - You provide a JSON string
body1 = "{'id':'device2','goo':'frodo'}"
event_data_batch.add(EventData(body1))

#Method 2 - You get the JSON object and convert it to a string
json_obj = {"id": "device3", "goo": "bilbo"}
body2 = json.dumps(json_obj)
event_data_batch.add(EventData(body2))

#This just sends a plain string, which will not be captured by TSI
event_data_batch.add(EventData('Third event'))
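A hedged search-time workaround while the ingest side is investigated (the index name and the exact escaping are assumptions and may need tweaking): strip the backslash-escapes into a copy of the event and let spath re-extract the fields:

index=azure sourcetype=mscs:azure:eventhub
| eval cleaned=replace(_raw, "\\\\\"", "\"")
| spath input=cleaned

Separately, note that Method 1 in the snippet sends a single-quoted string, which is not valid JSON even before any escaping, so only the json.dumps() body (Method 2) would ever extract cleanly.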
We've had several changes going on in some dashboards I've been building, including new data. Where we used to deal with only PRD data, we're adding some TST activity as well. The chart I'm trying to build shows counts of activity by PRD and TST, stacked, for each of our 3 current users over the last 7 months. We want an at-a-glance view of how much work is being done, by whom and where, and how one user compares to another. I can do it as separate charts, but that can be confusing: one person's count scale peaks at 25 where the other peaks at 66, so if you don't look at the fine print, User A doesn't look like they are doing a third of the work of User B. I've tried several variations of charts, timecharts, etc., but they either don't work, combine PRD/TST into one total, or don't stack. The best result for me would be one column (or bar if need be) per user per month, with two separate totals for PRD and TST counts stacked on each other.
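A hedged sketch of one way to get that shape in plain SPL (the index, sourcetype, and the user/env field names are assumptions): label each column with month plus user, then split the columns by environment so PRD and TST stack inside each one:

index=my_index sourcetype=my_activity (env=PRD OR env=TST) earliest=-7mon@mon
| bin _time span=1mon
| eval column=strftime(_time, "%Y-%m")." ".user
| chart count over column by env

Rendered as a stacked column chart, this gives one column per user per month with separate PRD and TST segments, and every user shares the same y-axis, so relative workloads compare at a glance; the YYYY-MM prefix keeps the columns in chronological order.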
Hello, I have to avoid matching several values in a field. The following works, but I'm wondering if there is a more efficient way:

index=wineventlog host="myhost" EventCode=7036
| regex Message!="WMI"
| regex Message!="WinHTTP"
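A hedged sketch of two equivalent but tidier variants (patterns copied from the question; adjust as needed). Either collapse the two regex calls into one alternation, or push the exclusion into the base search so fewer events are retrieved in the first place:

index=wineventlog host="myhost" EventCode=7036
| regex Message!="WMI|WinHTTP"

index=wineventlog host="myhost" EventCode=7036 NOT Message="*WMI*" NOT Message="*WinHTTP*"

Filtering in the base search is usually the more efficient of the two, since the unwanted events are never pulled back for the pipeline to discard.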
Following this: https://docs.splunk.com/Documentation/Splunk/9.0.0/Security/ConfigureandinstallcertificatesforLogObserver

Our current certificate is working, but is there anything special I need to do when using ACME? Once the certificate is in place and working, what needs to be done? Do I just restart Splunk and expect it to work? Is there anything else needed for the certificates to be updated automatically?
Hello - I am trying to troubleshoot an issue and have not had much success in determining a root cause. I was wondering if anybody else has ever seen this issue.

I inherited this cluster about a year ago. It is a distributed cluster hosted mostly in our own cloud environment, with a few parts located on-prem. The indexer tier consists of a CM with 3 peer nodes. The search tier is two SHs (not set up as a cluster): one is the main SH used by the company, and the other runs ES and is also the LM for the deployment. We have a license quota of 300 GB/day, and until very recently I believed we were averaging around 280-320 GB/day during the workweek.

In the past year, we have been told on multiple occasions that we are exceeding our license quota, and it has been a continuous effort to monitor and reduce our sourcetypes where possible to manage our daily ingestion volume. This is something we have spent a lot of time trying to maintain. Doing some log analysis on our sourcetypes, I have discovered what I believe is a duplication scenario somewhere between the ingest point and the indexer tier.

As a rough example, I have been taking measurements of our ingest to better understand what is happening. Using the method recommended in the Splunk forums, I started out measuring our ingest with this base search:

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage earliest=-1d@d
| stats sum(b) as bytes by idx
| eval gb=round(bytes/1024/1024/1024,3)

This sum came out to between 280-300 GB/day. I then tried to measure the bytes in the logs using the len() function, in a search like this:

index=* earliest=-1d@d
| eval EventSize=len(_raw)
| stats sum(EventSize) by index

This search sums to around 150 GB/day. From my understanding, this is the opposite of what I expected. I did not expect the numbers to be exactly the same, but being roughly double does not make sense to me. If I am ingesting a given number of bytes and Splunk is counting twice that, this is a big issue for us. Please let me know if I am incorrect in this assumption.

Another example: to reproduce this phenomenon, I created a scenario that ingests a small amount of data, to make measuring the difference easier. I wrote a script that makes an API call to another solution in our network to pull down information about its clients. Since this list is mostly static, I figured it would be a nice, small data source to work with. Running the script locally in a bash shell returns 606 JSON events; redirected to a local file, these 606 events total 396,647 bytes.

Next I put this script in an app on my search head, created a sourcetype specifically for the JSON data the API call returns, and created a scripted modular input to execute the API call every 60 seconds. I enabled the modular input, let it execute once, then disabled it. Looking for the logs in search, Splunk reports 1210 events, and checking len(_raw) shows 792,034 bytes.

This seems to be a very large issue for us, and it appears to be affecting every sourcetype we have. I did a similar test on our DNS logs and our firewall logs, which are our two largest sourcetypes. A close examination of the local log file shows logs that are almost exactly half the size in bytes of the same _raw log examined in Splunk. Has anyone ever seen an issue like this before?
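A hedged way to confirm whether events really are being indexed twice (the time range is just an example, and index=* can be narrowed to one suspect sourcetype to keep it cheap): hash the raw text and look for identical events landing more than once:

index=* earliest=-1d@d
| eval raw_hash=md5(_raw)
| stats count by raw_hash, index, sourcetype
| where count > 1
| stats sum(count) as duplicated_events by index, sourcetype

If this comes back with large numbers, the usual suspects are an outputs.conf on a forwarder with two target groups effectively cloning the data, the same files monitored by two inputs, or an intermediate forwarder looping data back, rather than anything wrong with the license accounting itself.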
I have a Case opened with Splunk but so far it has not yielded any actionable results and we are going on 2 weeks now of back and forth. Any ideas or insights would be greatly appreciated. TY  
I am using a single-value visualization panel and am having a drill-down issue, even though the link to open was configured correctly.
Hi team, how can I reduce the input text font size in a multiselect dropdown filter? Please let me know.
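A hedged sketch of the usual CSS workaround for Simple XML dashboards (the selector is only an example from older Splunk versions; inspect the multiselect in your browser's dev tools to find the right class for your version): add a hidden html panel carrying a small stylesheet:

<panel depends="$alwaysHideCSS$">
  <html>
    <style>
      .input-multiselect .select2-choices { font-size: 11px !important; }
    </style>
  </html>
</panel>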
I have the following query:

application_id=12345 STATUS_CODE IN (300, 400, 500) | head 10

How can I modify this so that I get 2 unique rows where STATUS_CODE is 300, 2 unique rows where STATUS_CODE is 400, 2 unique rows where STATUS_CODE is 500, and so on? The query above fetches the first 10 rows it can find, so I incorrectly end up with all 10 rows having STATUS_CODE 300. Please advise. Thanks.
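A hedged sketch of two ways to do this (assuming "unique" means at most two events per status code): dedup with a count keeps the first N results per field value, and streamstats does the same thing more explicitly:

application_id=12345 STATUS_CODE IN (300, 400, 500)
| dedup 2 STATUS_CODE

application_id=12345 STATUS_CODE IN (300, 400, 500)
| streamstats count as per_code by STATUS_CODE
| where per_code <= 2

Either way the result is up to six rows, two per status code, instead of whatever the first ten events happen to be.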
Hello, I've been searching the internet for quite a while but can't find an approach. I have a primary search that looks for IP networks in a CSV based on various parameters, such as location (inputlookup), and then creates a CIDR including the bit length of the subnet mask. Based on this search, I want to search for IPs in a second table. In principle I have already implemented this in an (admittedly poor) initial solution by using a token that I pass from one search to the other and then feed into a cidrmatch() there. This works fine as long as the first search returns exactly one result. Now I have the problem that the first search returns multiple results (e.g. multiple subnets at one location), and I want to search for matching IPs in the second CSV for all found subnets.

This is what the first search (already defined as a base search) looks like:

<search id="base">
  <query>
    | inputlookup list_of_subnet_sand_sites
    | search City="*" Street="*" NetIP="10.5.*.*"
    | rename NetMask AS mask
    | lookup ip_mask_prefix.csv mask OUTPUT prefix
    | rename mask AS NetMask
    | eval CIDRNet_mv = mvappend(NetIP, "/", prefix)
    | eval CIDRNet = mvjoin(CIDRNet_mv, "")
  </query>
  <done>
    <set token="CIDR_tok">$result.CIDRNet$</set>
  </done>
</search>

The first search returns perhaps 25 different IP subnets. And the second search is (currently I don't make use of the base search, but I want to):

<search>
  <query>
    | inputlookup list_of_devices
    | where cidrmatch("$CIDR_tok$", devIP)
    | sort devIP
  </query>
</search>

I have already tried things with subsearches, lookups, append and appendpipe. Thank you all.
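A hedged sketch of one way to cover all 25 subnets without the token mechanism (it deliberately repeats the first query inside a subsearch rather than reusing the base search, because subsearch results can be substituted as text): have the subsearch build an OR of cidrmatch() conditions and hand it to where:

| inputlookup list_of_devices
| where [
    | inputlookup list_of_subnet_sand_sites
    | search City="*" Street="*" NetIP="10.5.*.*"
    | rename NetMask AS mask
    | lookup ip_mask_prefix.csv mask OUTPUT prefix
    | eval cond="cidrmatch(\"".NetIP."/".prefix."\", devIP)"
    | stats values(cond) as cond
    | eval search=mvjoin(cond, " OR ")
    | return $search ]
| sort devIP

The subsearch returns a string like cidrmatch("10.5.1.0/24", devIP) OR cidrmatch("10.5.2.0/24", devIP), which where then evaluates; the default subsearch limits (10,000 results, 60 seconds) are not a concern for ~25 subnets.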
Hi, we recently upgraded from version 8.1.6 to version 9.0.1 and have discovered a problem not seen before. Each time I push the apps from the deployer to the SHC, all apps get pushed regardless of whether they are old, modified or new. As a result, each apply shcluster-bundle takes 5-6 hours to complete; before the upgrade a push took a couple of minutes. Each push also generates a default.old.<DATE> for every app. Is there a way to change this behaviour? We would prefer not to have lots of default.old.<DATE> copies, and for the pushes to be much faster. Thanks!
Hello,

| transaction RRN keepevicted=t
| search date_hour < 6

If I execute this search with a specific date (10-10-2022) I get 5 events. If I execute it with the preset "All time" I get no results, and with the preset "Last 30 days" I also get no results. All searches were done in verbose mode. Why don't I get results with the "All time" and/or "Last 30 days" presets? Thanks
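A hedged workaround, on the assumption that the missing results come from date_hour rather than from transaction itself: the date_* fields are created at index time and are not guaranteed to exist for every event or to survive every data path, so deriving the hour from _time is more robust across arbitrary time ranges:

| transaction RRN keepevicted=t
| eval hour=tonumber(strftime(_time, "%H"))
| where hour < 6

If this returns results over "All time" while the date_hour version does not, the events in the wider range simply lack a usable date_hour field.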
Hello, I need to match events with two kinds of text (different paths):

Appended to:  G:\Streamserve\
Appended to:  D:\G_volume\Streamserve\

As you can see, there is a part in the middle that differs (I only have those 2 kinds of cases). I tried \S* for non-whitespace characters but it's not working. Something like this:

Appended to: \w?:\\(G_volume)*\\*Streamserve

What is the easiest way to do it? Thanks for the help
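A hedged sketch of a matching regex (shown as it would be written inside an SPL regex/rex string, where a literal backslash has to be doubled up; drop half the backslashes in a raw PCRE context): make the middle folder optional and anchor on the fixed parts:

| regex _raw="Appended to:\s+[A-Z]:\\\\(G_volume\\\\)?Streamserve\\\\"

[A-Z]: matches the drive letter, and (G_volume\\\\)? makes the extra folder optional, so both path variants match while anything else is rejected.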
Hi guys, I'm monitoring external web server logs and want to run an alert that detects errors caused by IP addresses other than my own WAN IP assigned by my Internet provider. My idea was to run a script at index time for sourcetype "http:errors:linux" that puts just my external IP into a new/additional field "own_wan_ip". This needs to be done at index time and not at search time, because the external IP changes quite often, which would lead to a wrong "own_wan_ip" if it were added at search time. My Python mini-script:

#!/usr/bin/env python3
import os
os.system('dig +short txt ch whoami.cloudflare @1.0.0.1')

I tried to find a solution in Splunk's documentation, like this: https://docs.splunk.com/Documentation/Splunk/8.2.7/Data/Configureindex-timefieldextraction but there is only regex-based field creation. If anyone could provide me a solution for this I'd really be happy. Thanks in advance
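A hedged alternative that sidesteps index-time extraction entirely (the index, sourcetype, lookup and field names below are placeholders): run the script as a scripted input so the current WAN IP is indexed as its own small event, keep a one-row lookup fresh with a scheduled search, and let the alert exclude that IP via the lookup:

Scheduled search (e.g. every 15 minutes):
index=network sourcetype=own_wan_ip
| stats latest(_raw) as own_wan_ip
| outputlookup own_wan_ip.csv

Alert search:
index=web sourcetype="http:errors:linux" NOT [| inputlookup own_wan_ip.csv | rename own_wan_ip as clientip | fields clientip]

The exclusion is still evaluated at search time, but because the lookup is refreshed on a schedule it tracks the changing provider IP closely; stamping the IP that was current at the moment of indexing would need the scripted-input events themselves (correlated on _time), since index-time field extraction is regex-only, as the linked documentation says.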
Team, I need your assistance with the task below. I need to migrate Splunk sh-2 (a non-ES instance) from CentOS to Red Hat, from one VM to another VM. I would appreciate it if you could provide a step-by-step guide for this migration. Note: we need to keep the same IP address / hostname as the existing VM (Splunk server).
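A hedged outline of the usual like-for-like move, assuming a default /opt/splunk install and a plain standalone search head (adapt paths, versions and ownership to your environment):

# on the old CentOS search head
/opt/splunk/bin/splunk stop
tar -czf /tmp/splunk-etc.tar.gz -C /opt/splunk etc
# copy the tarball to the new host, then shut the old VM down so the IP/hostname can be reused

# on the new RHEL VM, after installing the SAME Splunk version as the source
tar -xzf /tmp/splunk-etc.tar.gz -C /opt/splunk
chown -R splunk:splunk /opt/splunk
/opt/splunk/bin/splunk start

$SPLUNK_HOME/etc carries the apps, users and knowledge objects; if scheduled-search artifacts, the KV store or any local indexes also matter, the relevant parts of $SPLUNK_HOME/var need their own copy. Because the hostname and IP stay the same, the deployment server, license master and search peers should not need re-pointing, though any Splunk version upgrade is best done as a separate step after the migration is confirmed healthy.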
Hello Splunkers, I have a really quick question. I want to create and push (via my DS) a fully custom add-on (or TA... not sure what to call it) to some of my UFs. Basically I only need an inputs.conf to monitor some log files, but I do not know whether I should place it under the default/ or local/ folder. I know that for a Splunk-provided TA I would override the default config files with my local config files, but for a custom app I don't really know. Should I create inputs.conf inside the default/ folder with "disabled = true" for all stanzas, and override it with my local inputs.conf? Thanks for your help, Gaetan
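A hedged sketch of the minimal layout (the app name, monitored path, index and sourcetype are placeholders): for an add-on you author and fully control from the DS, putting the stanzas straight into default/ is fine, because the deployment server replaces the whole app on every deploy anyway; local/ is where hand-made, per-host overrides would live and survive:

my_custom_inputs_TA/
    default/
        app.conf
        inputs.conf

default/inputs.conf:
[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log
disabled = false

The "ship it disabled in default/, enable it in local/" pattern mainly matters for vendor TAs you do not want to edit directly; for your own custom add-on it just adds an extra layer.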
Hello all, we have an update to our Splunk server and I am trying to figure out the workflow. The current version is 8.2 and the new server is 9.0. I want to restore the backup files from the current 8.2 server onto the new 9.0 server. Is it possible to restore a version 8.2 backup directly onto version 9.0? Or is it necessary to build a new machine with version 8.2, restore onto it, and then upgrade to version 9.0?
I need to create a new field assigned to the top results of a command using eval. Obviously this syntax doesn't work, so I'm looking for the correct query:

source="tutorialdata.zip*" | eval popular = top limit=5 itemId | stats count(action) by popular, action

Basically I only need the action stats for the top 5 itemId results. Sorry for the n00b question; I am just getting started with Splunk. Thanks for your time!
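A hedged way to express this with a subsearch instead of eval (source and field names taken from the question): let the subsearch compute the top 5 itemId values and feed them back into the outer search as a filter:

source="tutorialdata.zip*"
    [ search source="tutorialdata.zip*" | top limit=5 itemId | fields itemId ]
| stats count by itemId, action

The subsearch expands to (itemId=X OR itemId=Y OR ...), so the outer stats only ever sees events for the five most common itemId values.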
Hi all, the Windows Splunk UF has a process, splunk-winevtlog.exe, that reads the event log. On a small subset of servers I am seeing this process consume 100% of a virtual machine CPU (vCPU). In many cases this behaviour can go on for days/weeks until noticed (i.e. it is not burst-type usage). The Splunk UF is v8.2.1. In some cases, of two servers built at the same time and identically configured, one is fine and one is using 100% CPU.

Attempting to set current_only=1 in the inputs.conf stanzas does not resolve the issue. Restarting the service does not resolve it - it manifests again after a few minutes. Restarting the host does not resolve it either - it manifests again after a few minutes. I would be keen to hear if anyone has had similar experiences and how they remediated the issue. Cheers!
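A hedged set of inputs.conf settings that are often tried for this symptom (the channel and values are examples, not a recommendation; test one change at a time):

[WinEventLog://Security]
current_only = 0
evt_resolve_ad_obj = 0
renderXml = true

evt_resolve_ad_obj = 0 stops the forwarder resolving AD SIDs/GUIDs for every event, which is a common CPU sink on busy Security logs, and renderXml is sometimes reported to be cheaper than the classic text rendering path. Comparing the monitored WinEventLog channels and event rates between an affected server and its healthy twin is usually the quickest way to see which input is responsible.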
Below is my SPL:

| from datamodel:"Threat_Intelligence"."Threat_Activity"
| dedup threat_match_field, threat_match_value
| search NOT [| inputintelligence cisco_top_million_sites | rename domain as threat_match_value | table threat_match_value]

Explanation: from any threat activity detected, I want to remove false-positive domains by using cisco_top_million_sites as a reference to exclude FP domains. However, in the part where domains in threat_match_value are compared to the domains in the cisco_top_million_sites threat intel file, some domains are not getting excluded. It's mainly the content.dropboxapi.com domain that still appears in the results even though it's in the threat intel file, while other subdomains of dropboxapi.com are excluded. Can someone please help with fixing this?
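A hedged guess at the cause, plus a sketch: a NOT [ subsearch ] is capped by the subsearch limits (10,000 results by default), and a top-million-sites list is far larger than that, so whether a given domain is excluded can depend on whether it happened to make it into the truncated subsearch output. If the intelligence file is also available as a lookup (the lookup name and its domain field are assumptions here), a lookup-based exclusion avoids the limit entirely:

| from datamodel:"Threat_Intelligence"."Threat_Activity"
| dedup threat_match_field, threat_match_value
| lookup cisco_top_million_sites domain AS threat_match_value OUTPUT domain AS benign_domain
| where isnull(benign_domain)

Failing that, raising maxout under [subsearch] in limits.conf for this search is the blunter option.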