All Topics


I am having issues with my deployment app config files. Whenever I edit a config file for one of the applications under my deployment apps and save it, the change doesn't replicate to the deployment clients. My understanding is that each client checksums the apps in its folder, the deployment server compares those checksums, and then pushes any updated config files. I edited the config file to add event ID 4776 to the blacklist, but I can still see that event in my search results, so my changes are clearly not propagating. How do I fix this?
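The checksum-comparison idea described above can be sketched in plain Python. This is an illustration only, not Splunk's actual algorithm; the two paths in the comments are hypothetical examples of a server-side deployment app and its client-side copy.

```python
import hashlib
from pathlib import Path

def app_checksum(app_dir: str) -> str:
    """Hash every file in an app directory (relative path + contents) into
    one digest, roughly how a deployment server decides an app has changed."""
    digest = hashlib.md5()
    for f in sorted(Path(app_dir).rglob("*")):
        if f.is_file():
            digest.update(str(f.relative_to(app_dir)).encode())
            digest.update(f.read_bytes())
    return digest.hexdigest()

# If the two digests differ, the app would need a push (paths are examples):
# server = app_checksum("/opt/splunk/etc/deployment-apps/myapp")
# client = app_checksum("/opt/splunkforwarder/etc/apps/myapp")
# needs_push = server != client
```

If the digests match even after an edit, the edit usually never reached the directory the server actually reads from, which is one thing worth checking here.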
We migrated a single index to SmartStore about 3 months ago. It appears that since upgrading to v8.0.3 recently, data retention policies are not applying to local volumes. I see this in splunkd.log:

05-14-2020 09:54:57.707 -0700 WARN VolumeManager - Not trimming volume=splunk_coldStorage. Using maxVolumeDataSizeMB setting is ignored for volumes containing remote-storage enabled indexes. Please revisit your volume settings.

From the message in the log, it seems that the SmartStore config and local volume configs are in conflict. I am not entirely sure how to correct this. Relevant entries from indexes.conf:

[volume:s3]
storageType = remote
path = s3://…….

[volume:test_indexes]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 700000

[AccessProtection]
coldPath = volume:test_indexes/AccessProtection/colddb
homePath = volume:test_indexes/AccessProtection/db
thawedPath = $SPLUNK_DB/AccessProtection/thaweddb
repFactor = auto
frozenTimePeriodInSecs = 3456000
enableDataIntegrityControl = true
maxDataSize = 200

...

[floating-point-index]
remote.s3.encryption = sse-kms
remotePath = volume:s3/floating-point
coldPath = volume:test_indexes/floating-point/colddb
datatype = metric
homePath = volume:test_indexes/floating-point/db
maxTotalDataSizeMB = 512000
repFactor = auto
thawedPath = $SPLUNK_DB/floating-point/thaweddb
maxDataSize = 200
With multiple admins in our Splunk Cloud, we'd like to see any changes made that have a global or app-wide impact. Example: I just deleted a field alias (it was: cs_User_Agent_ == http_user_agent). Searching _audit and _internal for either of those terms, the only results I can find are searches - I can't actually find the event where the alias was deleted.
Hi there! I see that for the VPN dashboard to work, it looks for tag=vpn. My question is: how can I make sure the logs have been parsed the way InfoSec is expecting? I can see the connected and disconnected IDs from Cisco ASA are coming in. Can you please point me in the right direction if that is the case?

Disconnected logs:
May 20 20:25:45 xxx.xxx.xxx.xxx May 20 2020 20:23:55: %ASA-4-113019: Group = Remoe, Username = test.vpn, IP = xxx.xxx.xxx.xxx, Session disconnected. Session Type: AnyConnect-Parent, Duration: 0h:10m:16s, Bytes xmt: 132850266, Bytes rcv: 60187568, Reason: User Requested

Connected logs:
May 20 20:30:52 192.168.201.252 May 20 2020 20:29:02: %ASA-6-113039: Group <Remote> User <test.vpn> IP <xxx.xxx.xxx.xxx> AnyConnect parent session started.

How is the vpn tag parsed? Thanks in advance.
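One way to sanity-check extraction outside Splunk is to run the expected patterns over the sample events directly. The two message IDs above use different delimiters for the user field, so both need a pattern. This is an illustrative Python sketch, not the extractions any particular add-on ships:

```python
import re

# The two ASA formats in the samples delimit the user differently:
#   113019: "Username = test.vpn, IP = x.x.x.x"
#   113039: "User <test.vpn> IP <x.x.x.x>"
USER_PATTERNS = [
    re.compile(r"Username\s*=\s*(?P<user>[^,]+)"),
    re.compile(r"User\s*<(?P<user>[^>]+)>"),
]

def extract_user(event: str):
    """Return the VPN username from an ASA syslog line, or None."""
    for pat in USER_PATTERNS:
        m = pat.search(event)
        if m:
            return m.group("user").strip()
    return None
```

If a pattern like this pulls the right value from both connected and disconnected samples, the equivalent field extraction in Splunk should be workable too.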
<row>
  <panel>
    <input type="checkbox" token="Registrations">
      <label></label>
      <choice value="Registered Subscribers">Registered Subscribers</choice>
    </input>
  </panel>
</row>

I am thinking of something like this: 500. I know this works inside chart tags, but what about panels that have input types? Thanks.
For example, I have a 2x2 grid for 4 charts, but I only have 3 charts. I want to show the first 3 charts in the first 3 quadrants and leave the 4th quadrant blank/empty. This is what I have:

Code:

<panel>
  <title>blank/delete</title>
  <chart>
    <search>
      <query>blank/delete</query>
      <earliest>$earliest$</earliest>
      <latest>$latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
    <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
    <option name="charting.axisTitleX.visibility">visible</option>
    <option name="charting.axisTitleY.visibility">visible</option>
    <option name="charting.axisTitleY2.visibility">visible</option>
    <option name="charting.axisX.scale">linear</option>
    <option name="charting.axisY.scale">linear</option>
    <option name="charting.axisY2.enabled">0</option>
    <option name="charting.axisY2.scale">inherit</option>
    <option name="charting.chart">line</option>
    <option name="charting.chart.bubbleMaximumSize">50</option>
    <option name="charting.chart.bubbleMinimumSize">10</option>
    <option name="charting.chart.bubbleSizeBy">area</option>
    <option name="charting.chart.nullValueMode">gaps</option>
    <option name="charting.chart.showDataLabels">none</option>
    <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
    <option name="charting.chart.stackMode">default</option>
    <option name="charting.chart.style">minimal</option>
    <option name="charting.drilldown">all</option>
    <option name="charting.layout.splitSeries">0</option>
    <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
    <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
    <option name="charting.legend.placement">right</option>
  </chart>
</panel>
I'm currently trying to build a dashboard that would drill down by site name. Here's an example of a site name: ABC-DEF-PRIV-APJ-AU-SYD. The drill-down would be APJ (Region) --> ABC (Business Unit) --> assets. Could someone point me in the right direction to accomplish this? Here's a snippet of the search behind the visualization I'm getting:

index="lob_data" sourcetype="csv" sitename!="hec*" sitename!="corp*"
| where vulnAge > 30
| stats count(IP) as "Total Systems" by sitename, vulnAge
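The drill-down hinges on pulling the region and business unit out of the hyphenated site name. A quick Python sketch of the tokenizing, assuming the field positions are fixed as in the sample (in SPL the same idea would use split() and mvindex()); only positions 0 and 3 are stated in the question, so the other positions are left alone:

```python
def parse_sitename(sitename: str) -> dict:
    """Split a site name like ABC-DEF-PRIV-APJ-AU-SYD into drill-down keys.

    Positions taken from the sample: parts[0] = business unit (ABC),
    parts[3] = region (APJ). The remaining positions depend on your
    naming scheme and are not assumed here.
    """
    parts = sitename.split("-")
    return {"business_unit": parts[0], "region": parts[3]}

# parse_sitename("ABC-DEF-PRIV-APJ-AU-SYD")
# -> {"business_unit": "ABC", "region": "APJ"}
```

Once the tokens are fields, the dashboard can set a token per drill-down level (region first, then business unit) and filter the stats search on it.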
We want to be able to use Splunk as an auditing tool for our local and Active Directory groups. If changes to the groups occur, we want to be able to see that in a Splunk dashboard.
I know my alerts work because I can trigger them by entering the wrong password on purpose. The problem is that I am not getting an email. How do I fix that? Also, I saw the option to add the alert to "triggered alerts" - where does that reside? This is what I got from the _internal index:

05-20-2020 11:52:11.456 -0700 INFO SavedSplunker - savedsearch_id="nobody;search;Enclave Failed Logon AlertTEST", search_type="", user="xxxx", app="search", savedsearch_name="Enclave Failed Logon AlertTEST", priority=default, status=success, digest_mode=0, scheduled_time=1590000452, window_time=0, dispatch_time=1590000453, run_time=275.236, result_count=1, alert_actions="email", sid="rt_scheduler_xxxxsearch_RMD5300c713dc670b306_at_1590000452_240.0", suppressed=0, fired=1, skipped=0, action_time_ms=2405, thread_id="AlertNotifierWorker-0", message="", workload_pool=""

python.log:
sendemail:475 - [Errno 99] Cannot assign requested address while sending mail to: gen@generic.com
I have a feeling this question will answer a lot of other questions I have. This field - AllowedProtocolMatchedRule - is missing from my Cisco ISE logs. The field is needed to populate data in the Cisco ISE App dashboard. But I have no record of this field, even going back to the beginning of time. I'm not sure how to resolve this problem.
What is the best way to monitor log files that are unique to a host? For example, if hosta has log.x, and hostb has log.y, and so on, what would be the best way to define/import these logs from each individual host if they are unique to only one system within an environment?
We are about to enable the enable_memory_tracker feature. We'll use:

enable_memory_tracker = true
search_process_memory_usage_percentage_threshold = 13
search_process_memory_usage_threshold = 4000

In order to test it, how can I generate searches that consume gigabytes of memory?
I found a different answers article with an example of what I'm trying to do, but I can't get it to work on my end. I'd like to calculate a value using eval and a subsearch (adding a column in which every row holds this single calculated value). I've replicated what the past article advised, but I'm getting an "Error in 'eval' command: Fields cannot be assigned a boolean result. Instead, try if([bool expr], [expr], [expr])." message. I've also confirmed that the eval with the subsearch is causing this, because the query works when that function is removed.

Past article with the same question: https://answers.splunk.com/answers/240798/how-to-return-a-single-value-from-a-subsearch-into.html

Here's my query:

splunk_server=indexer* index=wsi_tax_summary sourcetype=stash intuit_tid=* intuit_offeringid=* provider_id=* partnerId=* capability=* error_msg_service=* http_status_code_host=* ofx_schema_response_error!=null
| eval ofx_schema_response_error= [eval statements unimportant for this example]
| stats dc(intuit_tid) as schema_error dc(eval(if(error_msg_service="OK", intuit_tid, null()))) as successful_imports by ofx_schema_response_error
| eval total_events = [search splunk_server=indexer* index=wsi_tax_summary sourcetype=stash intuit_tid=* intuit_offeringid=* provider_id=* partnerId=* capability=* error_msg_service=* http_status_code_host=* | stats dc(intuit_tid) as total_events | return total_events]
| eval failed_imports = schema_error - successful_imports
| sort - schema_error

Thanks!
Just wondering if it's possible to get data volume/size from tstats. I know you can do something like this to get counts (events per second):

| tstats count WHERE index=* by index | eval events_per_second=count/(3600*24)

But how can you use tstats to find the volume of data per unit of time?
I'm having no luck building a regex to match cs_username values. What I'm looking for are two separate searches, both based on the cs_username field. The first search is to find all instances where the usernames are in all CAPS. The second search is to find usernames that end in at least two digits. Examples of the logs are below:

2020-05-15 04:58:34 10.140.14.228 POST /NotAvailable.aspx - 80 Gerardot 10.140.15.235 Mozilla/5.0+(Linux;+Android+9;+SM-G960U)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/74.0.3729.136+Mobile+Safari/537.36 302 0 0 15 172.69.69.111
2020-05-15 04:57:19 10.140.14.228 POST /Account/Login.aspx - 80 Kaitlyn1230 10.140.15.235 Mozilla/5.0+(iPhone;+CPU+iPhone+OS+13_4+like+Mac+OS+X)+AppleWebKit/605.1.15+(KHTML,+like+Gecko)+GSA/107.0.310639584+Mobile/15E148+Safari/604.1 200 0 0 46 162.158.75.109
2020-05-15 04:54:24 10.140.14.228 POST /PaymentInfo.aspx - 80 Emulbah 10.140.15.235 Mozilla/5.0+(Windows+NT+6.1;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/81.0.4044.138+Safari/537.36 302 0 0 46 172.68.150.39
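The two patterns can be tested against the extracted usernames before wiring them into a search. A minimal Python sketch, assuming usernames contain only letters and digits (widen the character classes if yours allow more); "ADMIN" is an invented example since none of the sample usernames are all caps:

```python
import re

# Pattern 1: the whole username is upper-case letters.
ALL_CAPS = re.compile(r"^[A-Z]+$")
# Pattern 2: the username ends in at least two digits.
ENDS_TWO_DIGITS = re.compile(r"\d{2}$")

usernames = ["Gerardot", "Kaitlyn1230", "Emulbah", "ADMIN"]
caps = [u for u in usernames if ALL_CAPS.match(u)]          # ["ADMIN"]
two_digits = [u for u in usernames if ENDS_TWO_DIGITS.search(u)]  # ["Kaitlyn1230"]
```

In Splunk, the same patterns would sit behind the regex command, e.g. ... | regex cs_username="^[A-Z]+$" for the first search and ... | regex cs_username="\d{2}$" for the second, once cs_username is being extracted from these logs.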
Hello all, I have a TIME_PREFIX question. Here are my timestamps:

May 20 10:59:30 svr-orw-nac-01 2020-05-20 17:59:30,646
May 20 11:01:01 svr-ies-nac-02 2020-05-20 18:01:01,389

I am setting props.conf to the following:

[source::/var/log2/gns/nac/log_*]
MAX_TIMESTAMP_LOOKAHEAD = 31
TIME_PREFIX = ^\w+\s\d+\s\d+:\d+:\d+\ssvr-.*-nac-\d[01|02]\s
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N

Does this look right? Thanks, Ed
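The prefix regex can be checked against the samples in plain Python before deploying. One caveat: [01|02] is a character class that matches a single character ('0', '1', '|', or '2'), not the alternation 01|02, so while \d[01|02] happens to match "01" and "02", \d{2} states the intent more clearly. A sketch with that substitution (Splunk's %3N millisecond specifier is approximated with Python's %f):

```python
import re
from datetime import datetime

# TIME_PREFIX candidate with \d{2} instead of the character class [01|02]:
TIME_PREFIX = re.compile(r"^\w+\s\d+\s\d+:\d+:\d+\ssvr-.*-nac-\d{2}\s")

def timestamp_after_prefix(event: str) -> str:
    """Return the text the timestamp would be read from, i.e. what follows
    the TIME_PREFIX match; empty string if the prefix doesn't match."""
    m = TIME_PREFIX.match(event)
    return event[m.end():] if m else ""

sample = "May 20 10:59:30 svr-orw-nac-01 2020-05-20 17:59:30,646"
rest = timestamp_after_prefix(sample)           # "2020-05-20 17:59:30,646"
parsed = datetime.strptime(rest, "%Y-%m-%d %H:%M:%S,%f")
```

If the remainder parses cleanly with the equivalent format string, the TIME_PREFIX/TIME_FORMAT pair lines up with the data.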
Hello guys, is it OK to use srchMaxTime = 9000? It looks like it is interpreted as 9000 seconds. The authorize.conf doc asks for srchMaxTime = <integer><unit>. We use Splunk Enterprise 7.1.4. Thanks.
Hi all, I need to create a Splunk license usage report on a daily basis for all the reporting hosts. Can someone please help me with creating the search?

Host   Source   Sourcetype   Index         Count of events   License used in MB
abc    pqr      xyz          main          16405             27
bcd    rrs      yza          wineventlog   123               20
cde    rsp      urv          cisco         2345              105
Hello everyone, I am a beginner in AppDynamics. I have some doubts about tiers and nodes - can anyone explain to me in detail, with some good examples, what tiers and nodes are? Thanks in advance.
I'm very new to Phantom. Can someone provide some guidance or advice on naming playbooks - what has worked or hasn't worked? We will be starting with a small team, which may grow larger, working on various playbooks for the SOC. I come from a coding background, so I'm trying to keep things organized and consistent. I've typically used a folder structure to organize files, but it doesn't appear that this can be done here. I see there are other fields we can use, but I'm not sure if we should use those fields to organize playbook development. There are labels, tags, categories, and repos. Anyway, can some experts out there provide some guidance or share your naming conventions and which other fields you're using? I was thinking of something like the following for playbook names:

usage_dataType_app_description

usage: who is using it - is this a playbook for the SOC to use, or a playbook used only by other playbooks to call apps and return data?
dataType: is this for emails, web, URLs, files, etc.?
app: what app this is calling or what we're connecting to (LDAP, API, etc.).
description: a short, few-word description like UrlAnalysis.

Thanks, guys.