
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Okay, so this is quite theoretical... the nature of this search is to basically count the incoming domains when there are greater than 200 unique emails. Then, I need to count the outgoing domains when they MATCH those domains, do a count, and compare the % of the conversation, let's say >30%-45%... What are we doing with this? Answer: we are going to count the domains that send IN email that we clearly RESPOND to, without getting into the mixture of RE:'s and FWD's... Out of Office replies... these will not and should NOT make up 45% of the conversation... plus we have to be careful NOT to consider any emails which are NEW conversations... 1-2 emails in, compared to 1 reply back, is not 50% in this search... we have to look at a wide breadth of time to determine the real senders, but get rid of apple.card, itunes.com, apple.com and the things that "SEND" large quantities of email but do not have a formal conversation to them.

Do you have a search to play with on this? Answer: kind of... which is why I'm posting here hoping for better logic:

index=email "filter.routeDirection"=inbound
| rex field=envelope.from "\@(?<domainIN>[^ ]*)"
| stats dc(envelope.from) as num_count by domainIN
```Possible combination```
| join type=inner domainIN
    [search index=email sourcetype=pps_messagelog "filter.routeDirection"=outbound
    | rex field=msg.header.to "\@(?<domainIN2>.*)"]
| stats dc(msg.header.to) as num_count2 by domainIN2
| where num_count > 200

The data is Proofpoint, using sourcetype=pps_messagelog. Above is the capture regex of the domains seen where routeDirection=inbound. Now I need to compare the same domains seen in domainIN in the opposite direction... I could see where a "keep a count of large senders" lookup could be good, but the end goal of this is simply to make a list of the domains we KNOW we talk to... this will then get consumed by other security stacks as a method to determine "THIS IS A FRIEND", basically. As a use case, you could send that list to a Threat Intel platform for "domain watcher" status to catch the look-alike domains that could pop up, or, if a partner got compromised, you would KNOW it's a threat to you as well... I hope that makes sense. If not, I can answer questions, and hopefully my brain can help erode the terrible code you see above... cause it's not working..!!
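For anyone picking this up, here is a join-free sketch of the same idea that pulls both directions in one pass. The field names are taken from the post above, and the 200 and 30-45% thresholds are placeholders to tune; treat it as a starting point, not a finished search:

index=email sourcetype=pps_messagelog ("filter.routeDirection"=inbound OR "filter.routeDirection"=outbound)
| eval direction='filter.routeDirection'
| eval addr=if(direction="inbound", 'envelope.from', 'msg.header.to')
| rex field=addr "@(?<domain>\S+)"
| stats dc(eval(if(direction="inbound", addr, null()))) as unique_senders,
        count(eval(direction="inbound")) as inbound,
        count(eval(direction="outbound")) as outbound by domain
| where unique_senders > 200 AND outbound > 0
| eval reply_pct=round(outbound / (inbound + outbound) * 100, 1)
| where reply_pct > 30 AND reply_pct <= 45
| table domain unique_senders inbound outbound reply_pct

Skipping join avoids its subsearch row limits, and the eval-inside-stats pattern keeps inbound and outbound counts per domain in a single pass; the resulting list could then feed an outputlookup for the "known friends" use case.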
Hey, I'm very experienced using Splunk as an analyst, but not at all experienced on the admin side of things, but am trying to learn. I was recently given a JSON file full of Windows Logs to analyze. Not sure why they gave me the data that way, but they did, and that's how I have to use it.

When I try and upload the file to Splunk, I select "Add Data", I upload the file, and it does not recognize it as JSON. If I select json_no_timestamp, it seems to recognize it, but doesn't break it up into events. Every event starts the same way, and I copied the first 12 lines of JSON below (when auto-arranged). Using Regex101, I found a regex that matches the beginning of the event, but adding that into Event Breaks Pattern does not break the event. I've tried the following Event Breaks patterns, because sometimes when you copy the lines there is whitespace, and sometimes there is no whitespace (Splunk, Atom, and Regex101 show line breaks and whitespace, but when I copied it into this comment... no line breaks! Unsure if that's b/c of presentation or just copy/paste):

\{\s\"sort\"\:
{\n\s+\"sort\"
\{\r\n\s+\"sort\"\:
{ "sort":

{
  "data": [
    {
      "sort": [ 0 ],
      "_score": null,
      "_type": "winevtx",
      "_index": "winevtx",
      "_id": "==",
      "_source": {
        "process_id": 488,
        "message": "A Kerberos service ticket was requested.",
        "provider_guid": "{}",
        "log_name": "Security",
        "source_name": "Microsoft-Windows-Security-Auditing",
        "event_data": {
          "TicketOptions": "0x60810010",
          "TargetUserName": "JOHN$@LOCAL.LOCAL",
          "ServiceName": "krbtgt",
          "IpAddress": "::ffff:10.10.0.1",
          "TargetDomainName": "LOCAL.LOCAL",
          "IpPort": "53782",
          "TicketEncryptionType": "0x12",
          "LogonGuid": "{}",
          "TransmittedServices": "-",
          "Status": "0x0",
          "ServiceSid": "S-1-5-21-3052363079-1128767895-2942130287-502"
        },
        "beat": { "name": "LOCAL", "version": "5.2.2", "hostname": "LOCAL" },
        "thread_id": 1096,
        "@version": "1",
        "@metadata": {
          "index_local_timestamp": "2017-04-20T06:27:21.283576",
          "hostname": "LOCAL",
          "index_utc_timestamp": "2017-04-20T06:27:21.283576",
          "timezone": "UTC+0000"
        },
        "opcode": "Info",
        "@timestamp": "2017-04-20T06:25:33.801Z",
        "tags": [ "beats_input_codec_plain_applied" ],
        "type": "wineventlog",
        "computer_name": "LOCAL.LOCAL.local",
        "event_id": 4769,
        "record_number": "127898",
        "level": "Information",
        "keywords": [ "Audit Success" ],
        "host": "LOCAL",
        "task": "Kerberos Service Ticket Operations"
      }
    }
  ]
}

Every event starts with { "sort": [ 0 ], so I know that's where I want to break it up. I'm sure I'm missing something simple. What is it? Appreciate any assistance.
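A possible starting point for props.conf, based only on the sample above (the sourcetype name, line breaker, and timestamp format are assumptions to verify against the real file):

[json_winlogs]
SHOULD_LINEMERGE = false
# Break between array elements: the previous event keeps its closing brace,
# the comma/whitespace captured in the group is discarded, and the next
# event starts at { "sort"
LINE_BREAKER = \}(\s*,\s*)\{\s*"sort"
TIME_PREFIX = "@timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
TRUNCATE = 0
KV_MODE = json

Note the outer { "data": [ ... ] } wrapper will still cling to the first and last events, so pre-processing the file first (for example with jq '.data[]' to emit one object per event) may be the cleaner route if that's an option.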
hello
In my search I use an eval command like the one below in order to identify character strings in web URLs:

| eval Kheo=case(
    like(url,"%SLG%"),"G",
    like(url,"%SLK%"),"G",
    like(url,"%SLY%"),"I",
    like(url,"%SK%"),"T",
    like(url,"%SL%"),"E")
| search Kheo=*

The problem I have is that my eval identifies every url which contains, for example, the "SLG" letters in lowercase or uppercase. My need is to strictly identify URLs which contain the "SLG" letters in uppercase. I tried with match but it changes nothing:

| eval Kheo=case( match(url,"SLG"),"G", ...

could you help please?
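One thing worth checking: match() uses PCRE and is case-sensitive unless you add the (?i) flag, so if match(url,"SLG") still matches lowercase "slg", the url field itself has probably been normalized to lowercase at extraction time. In that case, re-extracting the URL from _raw preserves the original case (url_cs is a made-up field name here):

| rex field=_raw "(?<url_cs>https?://\S+)"
| eval Kheo=case(
    match(url_cs,"SLG"),"G",
    match(url_cs,"SLK"),"G",
    match(url_cs,"SLY"),"I",
    match(url_cs,"SK"),"T",
    match(url_cs,"SL"),"E")
| search Kheo=*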
Hello everyone, a question: I have the following problem. A query is made against a specific index and sourcetype at a certain time, and if I execute that query again the next day, the number of events is lower. It is worth mentioning that no corrupted buckets are visible, and the indexers' disks are not full, either of which could cause a loss of information. Excuse the Google translation.
Hi,
After reviewing most of the posts and not finding a solution, I finally came here to ask for help with my query problem. I have a lookup table that drives sweeps to check whether logs are missing in any particular index/host. My query worked like a charm for the last two years, but suddenly it started to show false positives.

My query:

| inputlookup mylookuptable.csv
| table index sourcetype host
| join index sourcetype host type=left
    [| tstats count where index=* sourcetype=* host=* by _time index sourcetype host
    | stats count by index sourcetype host]
| fillnull value=0
| search count = 0

Can someone please help me understand why it no longer returns rows only for the index/sourcetype/host combinations that are actually missing logs?
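One plausible cause: the inner tstats splits by _time, so the subsearch row count can exceed join's subsearch limits (50,000 rows by default), and join then silently drops combinations, which would surface as false "missing" rows. A join-free sketch that avoids those limits, assuming the lookup columns are exactly index, sourcetype, and host:

| tstats count where index=* by index sourcetype host
| eval in_data=1
| append [| inputlookup mylookuptable.csv | table index sourcetype host | eval in_lookup=1]
| stats max(in_data) as in_data max(in_lookup) as in_lookup by index sourcetype host
| where in_lookup=1 AND isnull(in_data)

This lists every monitored combination from the lookup that produced no events in the search window, without depending on join's row caps.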
I have a trial version of Splunk Cloud (Classic Experience) and I tried to upload/install a private app, as described here: http://docs.splunk.com/Documentation/SplunkCloud/8.2.2202/Admin/PrivateApps. I went to "App Management" and clicked the "Upload App" button. After I entered my splunk.com credentials, checked the T&C box and clicked the Login button, I saw a POST call to https://<my-trial-instance>.splunkcloud.com/en-US/splunkd/__raw/services/uploaded-apps/package?output_mode=json, but it returned a 404. I also saw an error message: "Error logging in. Enter your Splunk.com username and password. Splunk Cloud requires these credentials to complete app validation before installing your app." I'm not sure why I received a 404. It doesn't even seem to be checking my credentials, even though I entered them correctly. Any help would be appreciated. Thanks, Jason
Hello Community, I would like to add leading zeros in front of a value, but only display 5 characters for the value. In addition, I want to add a prefix of "ABC-". I have no issue with the prefix, but I need assistance with the zeros. I could add 4 zeros in front of the value and then trim the value to display the last 5 characters, but I wanted to see the cleanest way to accomplish this. Examples below.

Value = 876: I would like the new value to be ABC-00876.
Value = 1678: I would like the new value to be ABC-01678.
Value = 5: I would like the new value to be ABC-00005.

Thanks, Joe
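On versions with the printf eval function, this is a one-liner; the field name value is assumed from the examples:

| eval newValue = "ABC-" . printf("%05d", tonumber(value))

On versions without printf, the pad-and-trim approach you described works the same way: | eval newValue = "ABC-" . substr("00000" . value, -5), where the negative start in substr keeps the last 5 characters.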
Dear All, I want to install an external app in our client's Splunk instance. The problem I have is that my access account for this instance does not allow me to install applications, so my question is: which account (or role) should be used to install external applications? In my case I want to install the Splunk Add-on for Linux monitoring app. In addition, does installing this application require a license or any cost in addition to what has already been purchased from Splunk? Thanks in advance.
Hi, I am trying to automate alert setup for Splunk alerts. I am using the Splunk Terraform provider: https://registry.terraform.io/providers/splunk/splunk/latest/docs/resources/saved_searches#argument-reference

We have a couple of actions when an alert is generated, like email, Slack message, and call-out. The call-out uses a custom action called xMatters. This curl command creates the alert fine:

curl -ks -u username:password https://<splunkurl>:8089/servicesNS/nobody/digital_dcps_sre_search/saved/searches \
  -d name=No_Memory_Left \
  -d cron_schedule="*/5 * * * *" \
  -d dispatch.earliest_time="-24h@h" \
  -d dispatch.latest_time="now" \
  -d action.digital_slack="1" \
  -d action.digital_slack.param.channel="#dcps-sre-alerts" \
  -d action.digital_slack.param.message="Kong DP Alert. This line indicates that the data plane instance is trying to read a config from the control plane that is bigger than the config cache shared memory location. This means the data plane can no longer receive configuration updates." \
  -d action.digital_slack.param.workspace="rbwm" \
  -d alert.track="true" \
  -d alert_comparator="greater than" \
  -d alert_threshold="1" \
  -d is_scheduled="true" \
  -d alert_type="number of events" \
  -d action.abc_xmatters_alerts="0" \
  -d action.abc_xmatters_alerts.param.key="xxxxxx" \
  -d action.abc_xmatters_alerts.param.severity="MINOR" \
  -d action.abc_xmatters_alerts.param.summary="text=UK TP Splunk Alert. '$name$' alert was triggered. $result.final_gateway_url$. Link: $results_link$\napplication=Technical-Platform-Engineering_UK" \
  -d actions="digital_slack" \
  -d alert.digest_mode="true" \
  -d alert.expires="5d" \
  -d alert.severity="4" \
  -d alert.suppress="true" \
  -d alert.suppress.period="60m" \
  -d description="This line indicates that the data plane instance is trying to read a config from the control plane that is bigger than the config cache shared memory location. This means the data plane can no longer receive configuration updates" \
  -d disabled="false" \
  --data-urlencode search="search index=digital_technical_onprem_kongdp_raw sourcetype=SystemErr \\[clustering\\] unable to update running config: no memory"

But it fails if I try to use the Splunk Terraform provider, because action_param_key etc. are not supported. Is there any way I can set a custom action in an alert using Terraform?

resource "splunk_saved_searches" "No_Memory_Left" {
  name                   = "No_Memory_Left"
  cron_schedule          = "*/5 * * * *"
  dispatch_earliest_time = "-24h@h"
  dispatch_latest_time   = "now"
  #action_slack          = "1"
  action_slack_param_channel = "#dcps-sre-alerts"
  action_slack_param_message = "Kong DP Alert. This line indicates that the data plane instance is trying to read a config from the control plane that is bigger than the config cache shared memory location. This means the data plane can no longer receive configuration updates."
  #action_slack_param_workspace = "rbwm"
  alert_track      = "true"
  alert_comparator = "greater than"
  alert_threshold  = "1"
  is_scheduled     = "true"
  alert_type       = "number of events"
  #action_abc_xmatters_alerts = "0"
  #action_param_key           = "xxxxxx"
  #action_param_severity      = "MINOR"
  #action_param_summary       = "text=UK TP Splunk Alert. '$name$' alert was triggered. $result.final_gateway_url$. Link: $results_link$\napplication=Technical-Platform-Engineering_UK"
  actions               = "digital_slack"
  alert_digest_mode     = "true"
  alert_expires         = "5d"
  alert_severity        = "4"
  alert_suppress        = "true"
  alert_suppress_period = "60m"
  description           = "This line indicates that the data plane instance is trying to read a config from the control plane that is bigger than the config cache shared memory location. This means the data plane can no longer receive configuration updates"
  disabled              = "false"
  search                = "search index=digital_raw sourcetype=SystemErr \\[clustering\\] unable to update running config: no memory"

  acl {
    app     = "digital_dcps_sre_search"
    owner   = "GB-SVC-DSRE-SPL"
    sharing = "app"
  }
}
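The provider only exposes the action parameters it explicitly models, so one workaround (a sketch, not an officially supported path; the credential environment variables and the use of the null provider are assumptions) is to let Terraform create the saved search and then push the custom xMatters params through the REST endpoint afterwards:

resource "null_resource" "no_memory_left_xmatters" {
  # Re-apply whenever the saved search changes
  triggers = {
    search_id = splunk_saved_searches.No_Memory_Left.id
  }

  provisioner "local-exec" {
    command = <<-EOT
      curl -ks -u "$SPLUNK_USER:$SPLUNK_PASS" \
        "https://<splunkurl>:8089/servicesNS/nobody/digital_dcps_sre_search/saved/searches/No_Memory_Left" \
        -d action.abc_xmatters_alerts=0 \
        -d action.abc_xmatters_alerts.param.key=xxxxxx \
        -d action.abc_xmatters_alerts.param.severity=MINOR
    EOT
  }
}

This keeps the saved search itself declarative and confines the unsupported params to one provisioner, at the cost of Terraform not detecting drift on those params.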
Hi all, I have a riddle. Query A and query B do not collect the same events and I don't understand why.

Query A) returns 2 events as transactions:

| multisearch
    [search (11111111 OR 22222222) host=x index=y level=z (logger=a "text_a")]
    [search (11111111 OR 22222222) host=x index=y level=z (logger=b message="text_b")]
| rex field=_raw "<sg: ID>(?<ID>.*?)<"
| transaction ID keepevicted=false startswith="text_a" endswith=message="text_b"

Query B) returns 1 event as a transaction:

| multisearch
    [search (11111111 OR 22222222) host=x index=y level=z (logger=a "text_a")]
    [search host=x index=y level=z (logger=b message="text_b")]
| rex field=_raw "<sg: ID>(?<ID>.*?)<"
| transaction ID keepevicted=false startswith="text_a" endswith=message="text_b"

11111111 and 22222222 are used as IDs to test the query and to confirm its correctness. But if I remove these IDs from the second search, as in query B), then I get only one result; the other is missing.

At first I thought it was because of the enormous number of records. I used a time filter to reduce the records, ending up with 19,351 events. Unfortunately it didn't help. Of course, if I replace the multisearch with OR, it works.

Query C) If I move the ID filter to the second search, both events are there:

| multisearch
    [search host=x index=y level=z (logger=a "text_a")]
    [search (11111111 OR 22222222) host=x index=y level=z (logger=b message="text_b")]
| rex field=_raw "<sg: ID>(?<ID>.*?)<"
| transaction ID keepevicted=false startswith="text_a" endswith=message="text_b"

Query D) Just to be sure: if I remove "text_a" and message="text_b" from the searches, the event is still missing.

| multisearch
    [search (11111111 OR 22222222) host=x index=y level=z logger=a]
    [search host=x index=y level=z logger=b]
| rex field=_raw "<sg:ID>(?<ID>.*?)<"
| transaction ID keepevicted=false startswith="text_a" endswith=message="text_b"

Maybe some of you have already had similar issues with transaction and multisearch and know what could cause this problem. Thank you for your answers. Best regards, Robert
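One hypothesis worth testing (a guess, not a confirmed diagnosis): without the ID filter, the second subsearch returns far more events, transaction can start evicting open transactions once it hits its limits.conf memory caps (maxopentxn / maxopenevents under [transactions]), and keepevicted=false then discards them silently. Re-running with keepevicted=true and inspecting the closed_txn flag shows whether the missing transaction was evicted rather than never built:

| multisearch
    [search (11111111 OR 22222222) host=x index=y level=z (logger=a "text_a")]
    [search host=x index=y level=z (logger=b message="text_b")]
| rex field=_raw "<sg: ID>(?<ID>.*?)<"
| transaction ID keepevicted=true startswith="text_a" endswith=message="text_b"
| where closed_txn=0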
Hello guys, I need your help. I'm trying to connect Cisco AMP via the Cisco AMP for Endpoints Events 2.0.2 input via REST API. It doesn't work, and in the logs this error constantly appears:

ERROR Amp4eEvents - API Error (status 429): {"error":{"code":"429","message":"RATE LIMIT EXCEEDED next slot in 41m38s"}}

Is it possible to set a request limit somewhere? Or maybe the problem is something else?
We are moving away from using Windows Event Collection to installing the Universal Forwarder on as many Windows machines as we can. I ran into an interesting issue that I don't know how to resolve. Event 1646, when collected using WEC and then forwarded to Splunk, shows the rendered event text, which doesn't appear if the same event is sent directly by the UF. I copied the stanza used by the UF on the WEC server and deployed it to the machine where the event is generated, but I am still not seeing the "extra" data when not using WEC. What am I missing? (Something easy, no doubt.) It seems as though I don't see the "Message" field when the event is collected by the UF. Thanks in advance.
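Two common causes worth ruling out (both assumptions, not a confirmed diagnosis): the UF stanza collects events as raw XML, in which case the rendered Message text is not included, or the endpoint lacks the publisher's message DLLs that the WEC server has locally. The relevant inputs.conf setting looks like this:

[WinEventLog://Security]
disabled = 0
# When renderXml = true, events are forwarded as raw XML and the
# human-readable Message field is not rendered into the event
renderXml = false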
We have data coming in and we are still searching for a best practice on what alerts to monitor; however, my question is about the query below:

index="storage_vmax" sourcetype="dellemc:vmax:rest" type=ARRAY severity=FATAL
| search (severity!=NORMAL AND severity!=INFORMATION)
| stats count by _time, created_date, source, reporting_level, severity, asset_id, array_id, type, state, description

I would like to bring in only what was created in the last 24 hours. The problem with the existing query is that it brings in log entries created a year ago, which are stale. If we are going to have SNOW open tickets, we do not want it to do so on stale data, only on new data. Thanks, Dali
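If "created in the last 24 hours" should be judged by created_date rather than by index time, a filter along these lines could work; the strptime format string is an assumption, so adjust it to whatever created_date actually looks like:

index="storage_vmax" sourcetype="dellemc:vmax:rest" type=ARRAY severity=FATAL
| where strptime(created_date, "%Y-%m-%dT%H:%M:%S") >= relative_time(now(), "-24h")
| stats count by _time, created_date, source, reporting_level, severity, asset_id, array_id, type, state, description

If _time is trustworthy for these events, simply running the search with earliest=-24h is the cheaper fix.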
Hi, I need to count events between now() and now() minus 10 minutes. Something like this:

eval delta = now() - 10 minutes

Could you help please?
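eval works in epoch seconds, so 10 minutes is 600 seconds. A sketch (the index name is a placeholder):

index=your_index earliest=-10m latest=now
| stats count

Or, computing the boundary explicitly inside the search:

| eval delta = now() - 600
| where _time >= delta
| stats count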
I'm trying to create a search macro which accepts a field to match on and enriches the results with matches and outputs those enriching fields appending the matching value's matching field name as the new field names. For example: `my_macro(sourceAddress)` Should output the following field names (if it matches): sourceAddress_WHOIS sourceAddress_Severity sourceAddress_lastCheck Where WHOIS, Severity, and lastCheck are field names in the lookup table. This should also exhibit the same behavior, dynamically, for `my_macro(destinationAddress)`: destinationAddress_WHOIS destinationAddress_Severity destinationAddress_lastCheck This macro may be called multiple times against multiple field names in a single search.  destinationAddress, sourceAddress, clientAddress, proxyAddress, and more are all potential field names in the searches this macro would be used for and multiple combinations of each can potentially exist in each result.  I'd like to be able to clearly see which fields were enriched by the lookup table, if enrichment occurred.
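Macro expansion is purely textual, so $field$_WHOIS concatenates into sourceAddress_WHOIS on its own. A sketch of the definition of my_macro(1) with argument field, assuming a lookup named intel_lookup keyed on a column called address (both names invented here):

lookup intel_lookup address AS $field$ OUTPUTNEW WHOIS AS $field$_WHOIS Severity AS $field$_Severity lastCheck AS $field$_lastCheck

Because each call emits distinctly named output fields, `my_macro(sourceAddress)` `my_macro(destinationAddress)` can be chained in one search, and OUTPUTNEW keeps one call from clobbering another's results.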
Disclaimer: totally new to Splunk. Started using it this week, and nobody else in my office knows Splunk either. I created dashboards for Windows events like this one:

EventCode=4625 | timechart count by host sep=1hr

That shows a nice bar chart which gives information, like the number of events, when hovering the mouse over a bar. I want to do either or both of the following: 1) click on a bar and show all the event(s) information; 2) display all the events in another panel in the dashboard. Thank you for your assistance.
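One Simple XML pattern for option 2 (token and panel names are invented): let the chart's drilldown set a token, and make a second panel that depends on it:

<row>
  <panel>
    <chart>
      <search>
        <query>EventCode=4625 | timechart span=1h count by host</query>
      </search>
      <drilldown>
        <set token="sel_host">$click.name2$</set>
      </drilldown>
    </chart>
  </panel>
  <panel depends="$sel_host$">
    <event>
      <search>
        <query>EventCode=4625 host="$sel_host$"</query>
      </search>
    </event>
  </panel>
</row>

$click.name2$ carries the series (host) that was clicked, so clicking a bar populates the events panel, which stays hidden until the token is set. Note that span=1h is the usual spelling for the hourly bucket option in timechart.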
I'm currently trying to upload a malware feed into Threat Intelligence Management. The feed itself is being pulled from the following URL: https://bazaar.abuse.ch/export/csv/recent/. The issue is that, while it is in CSV format, the values themselves are also encapsulated by quotes, so they are being imported into file_intel with the quotes still attached. To extract the actual values, I put together a regular expression under "Extracting regular expression" which works on regexr and regex101, but this regular expression does not appear to be getting used, as the values in the lookup still look quoted. Is there a setting I am missing that is causing the regex to not be utilized?
Splunk Enterprise 8.0.4.1. There was a low disk space issue and a Health Status alert was raised, as expected. But now there is plenty of disk space and the message says:

04-22-2022 15:05:05.257 +0000 WARN DiskMon - MinFreeSpace=5000. The diskspace remaining=221121 is less than 2 x minFreeSpace

221121 is definitely not less than 2x5000. Am I missing something, or is it a bug?
Presuming there are limits (which may have changed over time), what are the current default limits for search exports from Splunk Web? Is it record count or search job size in bytes?
Hi All. We have a need to log only one event in Splunk for each Case_ID. However, a single case can have multiple problems and solutions entered by the user on our website, and based on the event in Splunk we need to publish some metrics in the dashboard. I need a suggestion for a better way to log the problem/solution combinations in a single event for a case_id, in a form that lets a query regenerate the table format within Splunk and populate the dashboard metrics shown in the screenshots below. Please assist.
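One way to structure this (the event shape and index name are invented for illustration): write all problem/solution pairs for a case as a JSON array inside the single event, then rebuild one row per pair at search time with spath and mvexpand.

Example event:

{"case_id": "C-1001", "entries": [{"problem": "P1", "solution": "S1"}, {"problem": "P2", "solution": "S2"}]}

Query to regenerate the table:

index=cases
| spath case_id
| spath path=entries{} output=entry
| mvexpand entry
| spath input=entry path=problem output=problem
| spath input=entry path=solution output=solution
| table case_id problem solution

This keeps the one-event-per-case constraint while the mvexpand step restores one row per problem/solution pair for the dashboard metrics.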