All Topics


Hi, I am new to Splunk, and I need some help with an SPL query to parse the user agent string in the log below:

"Mozilla/5.0 (Linux; Android 9; SAMSUNG SM-J330G) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/12.1 Chrome/79.0.3945.136 Mobile Safari/537.36"

Expected output:

Operating System | Mobile Device Info | Device Model | Browser | Browser Version
Android / iOS | Samsung / iPhone | SM-J330G / iPhone SE | Chrome / Mozilla | 79.0.3945

Thanks
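A minimal sketch of one way to pull those fields out with rex. The makeresults/eval lines just reproduce the sample for testing, and the regex only covers the Android/Chrome shape shown in the post; an iOS agent string follows a different pattern and would need its own rex.

```spl
| makeresults
| eval useragent="Mozilla/5.0 (Linux; Android 9; SAMSUNG SM-J330G) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/12.1 Chrome/79.0.3945.136 Mobile Safari/537.36"
| rex field=useragent "\([^;]+; (?<os>\w+) (?<os_version>[^;]+); (?<vendor>\S+) (?<device_model>[^)]+)\)"
| rex field=useragent "Chrome/(?<browser_version>[\d.]+)"
| table os os_version vendor device_model browser_version
```

In a real search you would replace the first two lines with your index and base search, leaving the rex extractions to run against whichever field holds the user agent.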
Wanted to know what the difference is between a single pane of glass and a glass table, and how searches work in a single pane of glass.
I have my log4j2.xml as below:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="info" name="example" packages="com.splunk.logging">
  <Appenders>
    <SplunkHttp
        name="splunk"
        url="http://localhost:8088"
        token="sometoken"
        index="someindex"
        messageFormat="text"
        source="somesource"
        sourceType="log4j"
        batch_size_count="1"
        disableCertificateValidation="true">
      <PatternLayout pattern="%m"/>
    </SplunkHttp>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="splunk"/>
    </Root>
  </Loggers>
</Configuration>

I'm trying to set up Splunk with HEC on an EC2 instance. The same configuration works for a Splunk instance on my Windows machine. I used tcpdump to trace packets on port 8088, and it seems no packets reach that port. Did I miss anything in the configuration? Thank you!
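To separate an appender problem from a network problem, it can help to hit the HEC endpoint directly, once from the EC2 instance itself and once from the machine running the application (the hostname below is a placeholder; the token is the one from the post):

```shell
curl -k "http://ec2-host.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk sometoken" \
  -d '{"event": "hello from curl", "sourcetype": "log4j"}'
```

A healthy endpoint answers with {"text":"Success","code":0}. If this works locally on the instance but not remotely, the usual suspects are the EC2 security group not allowing inbound 8088, or HEC bound only to localhost.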
I have an issue with one index in which a bucket is corrupted, and I lost logs from this index for a period of time. How can I fix this?
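For reference, a sketch of the splunk fsck CLI, which can scan an index for corrupt buckets and attempt a repair (the index name is a placeholder; run this on the indexer, and note that repair cannot bring back data that is physically gone from disk):

```shell
# scan all buckets of one index for corruption
splunk fsck scan --all-buckets-one-index --index-name=myindex

# attempt to repair what the scan flagged
splunk fsck repair --all-buckets-one-index --index-name=myindex
```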
This is the command, run as a .ps1 script pushed out by AirWatch:

msiexec.exe /i C:\Windows\Temp\splunk\splunkforwarder-8.0.6-152fb4b2bb96-x64-release.msi RECEIVING_INDEXER="<indexer>" WINEVENTLOG_SEC_ENABLE=1 WINEVENTLOG_SYS_ENABLE=1 MONITOR_PATH=“C:\Program Files\osquery\logs\osqueryd.results.log” MONITOR_PATH=“C:\ProgramData\Airwatch\UnifiedAgent\logs\” AGREETOLICENSE=Yes /quiet

It fails, but AirWatch gives no clue as to why. The script can be run from a cmd prompt as admin, and it executes as expected. When run from PowerShell locally or through AirWatch, this pops up on the remote endpoint. Has something changed in the last few days to make this happen? This install has been working for quite a while. Some Windows updates?
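One thing worth checking: the MONITOR_PATH arguments in the command above use curly quotes (“ ”), which msiexec does not treat as quoting characters, so a path containing a space can break argument parsing once the script travels through PowerShell. A sketch of the same command with straight quotes (indexer and paths carried over from the post):

```shell
msiexec.exe /i C:\Windows\Temp\splunk\splunkforwarder-8.0.6-152fb4b2bb96-x64-release.msi RECEIVING_INDEXER="<indexer>" WINEVENTLOG_SEC_ENABLE=1 WINEVENTLOG_SYS_ENABLE=1 MONITOR_PATH="C:\Program Files\osquery\logs\osqueryd.results.log" MONITOR_PATH="C:\ProgramData\Airwatch\UnifiedAgent\logs\" AGREETOLICENSE=Yes /quiet
```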
Has anyone seen this error message?

"External handler failed with code "1" and output REST ERROR[400] Bad Request - An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://queue.amazonaws.com/ is denied. See splunkd.log for stderr output."
Hello, I'm not so good with rex expressions; if someone can help me and give me some tips for next time, I'd appreciate it, thanks. I need to extract the balance (50446.50). The info is in field1:

'Col1' Endpoint[442] 'DO' Wallet[501] 'Trilogy' Balance '50446.50 USD'
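A sketch of one way to do it, anchoring on the literal Balance label (makeresults just reproduces the sample event for testing; point rex at your real field in practice):

```spl
| makeresults
| eval field1="'Col1' Endpoint[442] 'DO' Wallet[501] 'Trilogy' Balance '50446.50 USD'"
| rex field=field1 "Balance '(?<balance>[\d.]+)"
| table balance
```

The character class [\d.] keeps digits and the decimal point, so the match stops before the space and the currency code.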
Basically, I have a problem in which I want to run two queries: the first query returns the total number of requests, and the second query returns the requests that fail, so that I can calculate the percentage, but I am unable to do this with a subquery.

Currently, I am using this query:

"Carrier Failure: provider_name=*"
| dedup application_id
| stats count AS total_carrier_errors
| append
    [ search host="prod-celery-gateway-0*" sourcetype="supervisor" "driver dispatch_request: Sending request to" NOT failed
    | stats count AS total_requests ]
| table total_carrier_errors total_requests
| eval carrier_errors_percent=(total_carrier_errors/total_requests*100)

Can anyone guide me with this? Thank you!
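One thing to note about the query above: append stacks the two stats results as separate rows, so the final eval never sees both fields on the same row. A sketch using appendcols instead, which merges the subsearch result side by side (search strings reused from the post):

```spl
"Carrier Failure: provider_name=*"
| dedup application_id
| stats count AS total_carrier_errors
| appendcols
    [ search host="prod-celery-gateway-0*" sourcetype="supervisor" "driver dispatch_request: Sending request to" NOT failed
    | stats count AS total_requests ]
| eval carrier_errors_percent=round(total_carrier_errors/total_requests*100, 2)
| table total_carrier_errors total_requests carrier_errors_percent
```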
I want my users to have only read access to service definitions and thresholds. I created a new role with all of the read access capabilities in ITSI. Users with the new role should be able to see the various items but not edit them. However, when a user tries to view service definitions and thresholds, the page does not load.
Suppose we're setting up a multisite indexer cluster with 4 nodes in site1 and 3 nodes in site2:

[clustering]
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1, total:2
site_search_factor = origin:1, total:2

What happens if we lose, for instance, site2, given that all sites are non-explicit sites? According to my understanding of the documentation, the cluster fix-up process will "reserve" bucket copies in site1 in preparation for the return of the site2 peers, given that total minus explicit sites equals 2, i.e. "the search and replication factors are sufficiently large", as the documentation says:

For non-explicit sites, the cluster reserves one searchable copy if the total components of the site's search and replication factors are sufficiently large, after handling any explicit sites, to accommodate the copy. (If the search factor isn't sufficiently large but the replication factor is, the cluster reserves one non-searchable copy.)

Is my understanding of the documentation correct, or am I missing something? Is there any failover timer that could be configured so the cluster fix-up process gives site2 some room to recover before the "reserve" bucket copies start to be created? Lastly, should we reserve some storage in site1 to accommodate an event where "reserve" bucket copies are created? Is there any golden number we could use for the amount of storage that should be reserved? Thanks in advance.
Hi guys, I've been trying to add data on my HF, and when I submit my input I receive this error: "could not find writer for: /nobody/...". I do not know what that means... Could you please help me figure it out? Thanks in advance.
Hi Splunkers, we have multiple sources reporting to the same index. What we observe is that for some sources the earliest searchable event is from September, while for other sources the earliest event is from August. Do different sources have different retention policies, or is the data stored in different buckets which are searchable?

Retention policy for the index:

Setting | Value
maxTotalDataSizeMB | 307200
frozenTimePeriodInSecs | 188697600
homePath.maxDataSizeMB | 0
coldPath.maxDataSizeMB | 0
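For context: retention is applied per bucket, not per source, so sources whose events land in different buckets can age out at different times. A sketch for inspecting the time range of each bucket in the index (index name is a placeholder):

```spl
| dbinspect index=myindex
| eval earliest_event=strftime(startEpoch, "%Y-%m-%d"), latest_event=strftime(endEpoch, "%Y-%m-%d")
| table bucketId state earliest_event latest_event
| sort startEpoch
```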
I am getting multiline events. The events below should be shown as separate events, one per timestamp. How can I fix this? In my props.conf I have the config below, but it isn't working:

BREAK_ONLY_BEFORE_DATE = true
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
MAX_TIMESTAMP_LOOKAHEAD = 24

14/10/2020 19:46:58.035

2020-10-14T19:46:58.035+01:00 INFO [r] (default task-11) Checking user auth. [token(abbr.)=null,xForwardedFor=,remoteAddr=11.18xxx]
2020-10-14T19:46:58.035+01:00 INFO [] (default task-82) Checking user auth. [token(abbr.)=null,xForwardedFor=,remoteAddr=11.18xxx]
2020-10-14T19:46:58.035+01:00 INFO [] (default task-11) User token loaded. [windowsId=null,internal=false,externalOrigin=false,status=TOKEN_IS_NULL_OR_EMPTY]
2020-10-14T19:46:58.035+01:00 INFO [sso.AuthFilter
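A sketch of a props.conf that breaks explicitly on the ISO timestamps in the sample rather than relying on BREAK_ONLY_BEFORE_DATE (the sourcetype name is a placeholder; note the sample events carry a +01:00 offset rather than a literal Z, which the TIME_FORMAT should reflect):

```ini
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
```

The lookahead in LINE_BREAKER starts a new event at every line beginning with a 2020-10-14T...-style timestamp, which should split the four sample lines into four events.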
I have something like 20+ alerts that give my team telemetry data on our ESX and storage clusters. We collect our metrics from a data bus via API calls and then send them into Splunk for analysis. Sometimes, when the team that manages the data bus has an issue, my reports don't trigger unless I manually run them. I want to know if there is a way to create a query that will continuously run every hour until the alert completes with results.
If you have issues where the Sophos sourcetype is not extracting the source web server & malware signature from web activity events, add this line to pull those fields. I couldn't find a solution for this problem, so here's mine:

"Access was blocked to \"(?<origin>[^\"]+)\" because of \"(?<threat>[^\"]+)\"."

This will make use of the already created but null fields, origin & threat.
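One place this regex can live is an inline extraction in props.conf on the search head. The stanza name below is an assumption; match it to whatever sourcetype your Sophos web events actually use, and note that in a .conf file the quotes do not need backslash escaping:

```ini
[sophos:webfilter]
EXTRACT-origin_threat = Access was blocked to "(?<origin>[^"]+)" because of "(?<threat>[^"]+)"
```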
I am building a table displayed in a Splunk dashboard that needs a complicated query, and I was hoping to get a quick pointer in the right direction. Let's say I have a table of events at different levels per person:

id | event_id | level_id | user_id
1 | 1 | 1 | 1
2 | 1 | 2 | 1
3 | 1 | 1 | 2
4 | 1 | 2 | 2
5 | 1 | 3 | 2
6 | 1 | 1 | 3

For these events, only the highest-level event is relevant per person. How would I construct a query that shows the following stats, side by side:
1) the total number for each event at each level (3rd column below, I have this part), and
2) the number of users at a specific event level (4th column below); this is basically the count of the highest level per event per person.

The results I'm looking for would look like:

event_id | level_id | total_occured | total_users_at_level
1 | 1 | 3 | 1
1 | 2 | 2 | 1
1 | 3 | 1 | 1

Any help would be greatly appreciated. Thanks!
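A sketch under the assumption that each row above is a Splunk event with fields event_id, level_id, and user_id (append this to your base search). The idea is to precompute each user's highest level per event with eventstats, then count a user only in the row matching that highest level:

```spl
| eventstats count as total_occured by event_id, level_id
| eventstats max(level_id) as max_level by event_id, user_id
| eval at_max=if(level_id=max_level, user_id, null())
| stats values(total_occured) as total_occured dc(at_max) as total_users_at_level by event_id, level_id
| sort event_id level_id
```

On the sample data this yields the three rows in the expected output: dc(at_max) counts distinct users whose highest level equals that row's level, while total_occured keeps the plain per-level event count.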
Hello, I am trying to create a Splunk alert that triggers when it detects an anomaly in the firewall logs based on IDS signature. I created a pretty good graph that would work well in a dashboard, but I need it to populate a table or stats showing when an outlier is found and which signature it is. This is what I have so far:

index="firewall" sourcetype="threat" tag=attack action=allowed
| bin _time span=4h
| eventstats count(signature) as "Count" by _time
| eventstats values(Count) as valu
| eventstats count(valu) as help by _time
| eventstats median(Count) as med
| eval newValue = abs(Count-med)
| eventstats median(newValue) as medianAbsDev by signature
| eval upper = med+(medianAbsDev*1.1)
| eval lower = 0
| eval isOutlier=if(Count < lower OR Count > upper, 1, 0)
| timechart count span=1h count(signature) as CountOfIndicator, eval(values(upper)) as upperl, eval(values(lower)) as lowerl, eval(values(isOutlier)) as Outliers by signature usenull=f useother=f
| filldown

I just need to be able to identify the outliers in a table so I can have it generate an alert when the query has results.
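A sketch of a tabular variant that keeps only the outlier rows, so the alert can simply trigger on "number of results > 0". It reuses the median-absolute-deviation idea from the search above, but computes the count and the thresholds per signature so the where clause has one row per signature and time bin:

```spl
index="firewall" sourcetype="threat" tag=attack action=allowed
| bin _time span=4h
| stats count as Count by _time, signature
| eventstats median(Count) as med by signature
| eval absDev=abs(Count-med)
| eventstats median(absDev) as medianAbsDev by signature
| eval upper=med+(medianAbsDev*1.1)
| where Count > upper
| table _time signature Count med upper
```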
Hi, I would like to ask about this problem: I have a HF from which I need to forward its audit log ($SPLUNK_HOME/var/log/splunk/audit.log) to an external syslog server. I made this config:

inputs.conf
[monitor://$SPLUNK_HOME/var/log/splunk/audit.log]
sourcetype = send_to_syslog

props.conf
[send_to_syslog]
TRANSFORMS-test_internal_logs_syslog = test_internal_logs_syslog

transforms.conf
[test_internal_logs_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = test_internal_logs_syslog

outputs.conf
[syslog]
defaultGroup = test_internal_logs_syslog

[syslog:test_internal_logs_syslog]
disabled = false
server = tpsmscs02:2601
type = tcp
priority = NO_PRI
maxEventSize = 16384

The problem is that not only the audit log is forwarded, but all incoming logs as well, which is not desired. So I removed "defaultGroup = test_internal_logs_syslog" from outputs.conf - and then neither the audit log nor anything else is forwarded, simply nothing. AFAIK my config without defaultGroup should work... Could someone check it and tell me what I am doing wrong? Thanks in advance. Best regards, Lukas
Hello, I have a <panel> <chart> with extremely skinny columns on a simple column chart. What is the simplest way to increase the width or thickness of the actual columns in the chart? There seem to be large enough gaps in the chart to accommodate readable columns. Thanks!
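One knob worth experimenting with is the column/series spacing options in Simple XML charting: smaller gaps leave more width for each column. A sketch (the search and the spacing values are placeholders to try, not a guaranteed fix; reducing the number of plotted data points, e.g. a coarser timechart span, also widens columns):

```xml
<chart>
  <search>
    <query>index=_internal | timechart span=1d count</query>
  </search>
  <option name="charting.chart">column</option>
  <option name="charting.chart.columnSpacing">1</option>
  <option name="charting.chart.seriesSpacing">0</option>
</chart>
```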
Looking for insight into how people manage macros and other knowledge objects when new logs can get added without us knowing. We have a number of macros, and when a new log source is added - which we do not always know about - we can miss items because of the filtering within the macro/search or a field extraction. The logging standards are good, but we just keep getting new items.

I was thinking of doing a check of the field extractions to find differences through a quick search or some type of lookup, which could then be used to populate dashboard items more easily than the search we use now.
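A sketch of one way to spot newly arrived log sources, by comparing when each index/sourcetype pair was first seen (the index=* scope and the 7-day window are assumptions to adjust):

```spl
| tstats min(_time) as first_seen where index=* by index, sourcetype
| where first_seen > relative_time(now(), "-7d@d")
| eval first_seen=strftime(first_seen, "%Y-%m-%d %H:%M")
| sort - first_seen
```

Saved as a scheduled search or fed into a lookup, this gives a running list of sourcetypes that appeared recently and therefore may not yet be covered by the macros.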