All Posts

Getting the day of week from time is simple with

| eval dow=strftime(_time, "%w")

which gives you a value 0-6 (0=Sunday). Remember that mod will only work for you if your time component is the same - in your example it's 5am, so 5am Thursday is 5 * 3600 = 18000, whereas strftime will simply give you the day of week. Another function that is useful for determining the range of the search is addinfo, which adds some fields to the event (https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/SearchReference/addinfo) so you can know the search start and end time range. From there you know what day the user has chosen to start the search from. Here is something you can do to filter out the data before the timechart - it effectively removes all data that is not in the correct day:

| addinfo
| where strftime(info_min_time, "%w")=strftime(_time, "%w")
| fields - info*

Put that before the timechart. However, if you can select options other than weekly, your timechart and day-of-week will no longer be relevant.

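To make the placement concrete, here is a minimal sketch of the full pipeline; my_index, my_sourcetype and the 1-hour span are placeholders, not anything from the original question:

index=my_index sourcetype=my_sourcetype
| addinfo
| where strftime(info_min_time, "%w")=strftime(_time, "%w")
| fields - info*
| timechart span=1h count

The where clause keeps only events whose day of week matches the day the search window starts on, so the timechart is built from that single day of week.
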
As I said on Reddit, you should take a look at Mothership. https://splunkbase.splunk.com/app/4646 
I think I figured it out:

index=original_index
| addinfo
| eval search_now=info_max_time
| eval _raw=printf("_time=%d", info_min_time)
| foreach "*"
    [| eval _raw = _raw.case(isnull('<<FIELD>>'),"",
        mvcount('<<FIELD>>')>1,", <<FIELD>>=\"".mvjoin('<<FIELD>>',"###")."\"",
        true(), ", <<FIELD>>=\"".'<<FIELD>>'."\"")
    | fields - "<<FIELD>>" ]
| collect index=summary testmode=false file=summary_test_1.stash_new name="summary_test_1" marker="report=\"summary_test_1\""

Trying to set up the Splunk OTel Collector using the image quay.io/signalfx/splunk-otel-collector:latest in Docker Desktop or Azure Container App, to read logs from a file using the filelog receiver and the splunk_hec exporter. However, I am receiving the following error:

2024-03-07 12:56:27 2024-03-07T17:56:27.001Z info exporterhelper/retry_sender.go:118 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec", "error": "Post \"https://splunkcnqa-hf-east.com.cn:8088/services/collector/event\": dial tcp 42.159.148.223:8088: i/o timeout (Client.Timeout exceeded while awaiting headers)", "interval": "2.984810083s"}

I am using the config below:
============================
extensions:
  memory_ballast:
    size_mib: 500
receivers:
  filelog:
    include:
      - /var/log/*.log
    encoding: utf-8
    fingerprint_size: 1kb
    force_flush_period: "0"
    include_file_name: false
    include_file_path: true
    max_concurrent_files: 100
    max_log_size: 1MiB
    operators:
      - id: parser-docker
        timestamp:
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
          parse_from: attributes.time
        type: json_parser
    poll_interval: 200ms
    start_at: beginning
processors:
  batch:
exporters:
  splunk_hec:
    token: "XXXXXX"
    endpoint: "https://splunkcnqa-hf-east.com.cn:8088/services/collector/event"
    source: "otel"
    sourcetype: "otel"
    index: "index_preprod"
    profiling_data_enabled: true
    tls:
      insecure_skip_verify: true
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [splunk_hec]

Hi, I am trying to explore APM in Splunk Observability. However, I am facing a challenge setting up AlwaysOn Profiling. I am wondering if this feature is not available in the trial version. Can someone confirm?
Hello, how do I assign the search_now value from info_max_time in _raw? I am trying to push "past" data into a summary index using the collect command. I want to use search_now as a baseline time. I appreciate your help. Thank you. Here's my attempt using some code from @bowesmana, but it gave me a duplicate search_now:

index=original_index
| addinfo
| eval _raw=printf("search_now=%d", info_max_time)
| foreach "*"
    [| eval _raw = _raw.case(isnull('<<FIELD>>'),"",
        mvcount('<<FIELD>>')>1,", <<FIELD>>=\"".mvjoin('<<FIELD>>',"###")."\"",
        true(), ", <<FIELD>>=\"".'<<FIELD>>'."\"")
    | fields - "<<FIELD>>" ]
| collect index=summary testmode=false file=summary_test_1.stash_new name="summary_test_1" marker="report=\"summary_test_1\""

Hi @Zoltan.Gutleber, Thanks so much for coming back and sharing the solution!
Can you explain the regex? It looks like (?::) looks for any field identifier... but what about {0}? Is that for zero of that field identifier? I am asking because I am trying the stanza below and it doesn't seem to be working:

[source::(top)|(ps)|(bandwidth)]

Context: sending ta_nix data from a UF to an HF, then using RFS to send to a flat file. The stanza above simply routes the data to the destination using ingest actions. Thanks!

Is the geostats command supported by this visualization type for displaying city names in cluster bubbles? It seems not. Here is the command I am using for my result:

| (some result that produces destination IPs and a total count by them)
| iplocation prefix=dest_iploc_ dest_ip
| eval dest_Region_Country=dest_iploc_Region.", ".dest_iploc_Country
| geostats globallimit=0 locallimit=15 binspanlat=21.5 binspanlong=21.5 longfield=dest_iploc_lon latfield=dest_iploc_lat sum(Total) BY dest_Region_Country

In the search result visualization (which uses the old dashboard cluster map visualization and not the new Dashboard Studio one), this returns a proper cluster map: there are bubbles showing areas on the grid where there were a lot of total connections, and when moused over I can see the individual regions/cities contributing to the total. However, when I put this query into my Dashboard Studio visualization using Map > Bubble, it either breaks (when there are too many city values... because there are as many cities as there are), or, when I change the grouping to use countries for example, it breaks in a different way when it tries to alphabetize all the countries under each bubble (I am obviously mousing over a bubble in Bogota, Colombia here, not Busan, South Korea or anywhere in Germany). Not to mention the insane lag caused by this dashboard element. What should I do for my use case? Switch off of Dashboard Studio? That aside, has anyone figured out a way to make interconnected bubbles/points showing sources and destinations like this (this is not intended as an ad, but an example)?

Hi, we have a log that contains the number of times any specific message has been sent by the user in every session. This log contains the user's ID (data.entity-id), the message ID (data.message-code), the message name (data.message-name), and the total number of times it was sent during each session (data.total-received). I'm trying to create a table where the first column shows the user's ID (data.entity-id), and each subsequent column shows the sum of the total number of times each message type (data.total-received) was received. Ideally I would be able to create a dashboard where I can have a list of the data.message-codes I want to be used as columns. Example data:

data: {
  entity-id: 1
  message-code: 445
  message-name: handshake
  total-received: 10
}
data: {
  entity-id: 1
  message-code: 269
  message-name: switchPlayer
  total-received: 20
}
data: {
  entity-id: 1
  message-code: 269
  message-name: switchPlayer
  total-received: 22
}
data: {
  entity-id: 2
  message-code: 445
  message-name: handshake
  total-received: 12
}
data: {
  entity-id: 2
  message-code: 269
  message-name: switchPlayer
  total-received: 25
}
data: {
  entity-id: 2
  message-code: 269
  message-name: switchPlayer
  total-received: 30
}

Ideally the table would look like this:

Entity-id | handshake | switchPlayer
1         | 10        | 42
2         | 12        | 55

Is this possible? What would be the best way to store the message-code in a dashboard? Thanks!

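A minimal sketch of one way to build that table with chart, assuming the JSON fields are extracted under the names shown above; my_index and my_sourcetype are placeholders:

index=my_index sourcetype=my_sourcetype
| rename "data.entity-id" AS entity_id, "data.message-name" AS message_name, "data.total-received" AS total_received
| chart sum(total_received) OVER entity_id BY message_name

With the sample data this should give handshake=10 and switchPlayer=42 for entity 1, and 12/55 for entity 2. For the dashboard, one option is a multiselect input whose token filters data.message-code before the chart, so only the selected message types become columns.
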
Hi, I am attempting to integrate Microsoft Azure with Splunk Enterprise to retrieve the status of App Services. Could someone please provide a step-by-step guide for the integration? I have attached a snapshot for reference.
Dear Splunk Community, I am seeking your thoughts and suggestions on an error I am facing with TrackMe:

ERROR search_command.py _report_unexpected_error 1013 Exception at "/opt/splunk/etc/apps/trackme/bin/splunkremotesearch.py", line 501 : This TrackMe deployment is running in Free limited edition and you have reached the maximum number of 1 remote deployment, only the first remote account (local) can be used

Background:
- Objective - set up TrackMe monitoring (a virtual tenant - dsm, dhm & mhm) for our remote Splunk deployment (Splunk Cloud).
- The TrackMe app is installed on our on-prem Splunk instance, and the Splunk target URL and port were added under Configuration --> Remote deployments accounts (only one account).
- No issues with the connectivity; it is successful (pic below).
- We are using the free license and, per the TrackMe documentation, are allowed to use 1 remote deployment.

Could we use the free license in our case, or how do we get rid of the 'local' deployment? Please suggest.

Is there documentation that explains how to create a custom app that can be uploaded to Splunk Cloud?
Please share your current dashboard source (or at least a cutdown version showing your filters, rows and panels)
@evinasco08 It truly depends on many factors, like the number of events, the number of forwarders, stakeholder expectations/requirements, and how much time and money you have!

https://docs.splunk.com/Documentation/Splunk/latest/Capacity/Referencehardware
https://www.aplura.com/splunk-best-practices/#hardware

This blog post shows you the bandwidth usage difference between universal and heavy forwarders: Universal or Heavy, that is the question? | Splunk

The general answer is "it depends", however you can refer to:
https://answers.splunk.com/answers/2014/what-is-the-minimum-network-bandwidth-required-for-splunk-forwarding.html
https://answers.splunk.com/answers/340084/how-to-search-how-much-bandwidth-a-forwarder-is-us.html
https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Systemrequirements
https://www.splunk.com/en_us/pdfs/partners/tech-briefs/deploying-splunk-enterprise-on-google-cloud-platform.pdf

Getting this error via PowerShell for the Splunk Universal Forwarder installation.

Error below:

The term 'C:\Program Files\SplunkUniversalForwarder\bin\splunk' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
working.ps1:17 char:3

Here is how I defined my variables:

$SplunkInstallationDir = "C:\Program Files\SplunkUniversalForwarder"
& "$SplunkInstallationDir\bin\splunk" start --accept-license --answer-yes --no-prompt

It only works if I run it manually. Kindly assist.

Hi team, what minimum bandwidth is necessary between the indexers and the rest of the platform elements (heavy forwarders, search heads, cluster master, license master, deployment servers, etc.) for the different communications?
We're getting hammered by these. 20k in 24 hours.  
An update: I've gotten pretty close by doing this:

index=index_1 (sourcetype=source_2 earliest=1 latest=now()) OR (sourcetype=source_1 field_D="DeviceType" field_E=*Down* OR field_E=*Up*)
| eval field_AB=coalesce(field_A, field_B)
| fields field_D field_E field_AB field_C
| stats values(field_E) values(field_C) by field_AB

and then using the UI to sort ascending by field_E. However, there are still many cases where field_C is not populating for a given field_AB, despite that same field_C appearing with a different field_AB. To illustrate, see below:

field_AB | field_E | field_C
11122233 | Up Down | 00002
11111179 | Down    |
11111178 | Up      | 00001

11111178 and 11111179 share 00001 as field_C, but it's not populating for 11111179. I checked to make sure it wasn't missing in source_2 and found multiple events where field_AB=11111178 and field_C=00001. With that in mind, I'm not sure how to ensure I'm getting all the events from the log besides using the earliest and latest filters, unless I'm misunderstanding how those filters work.

Thank you all, here are the updated configurations adapted from the previous ones. NSG logs are written into an Azure Storage Account. A Splunk HF then reads the logs from the Azure Storage Account with the "Splunk Add-on for Microsoft Cloud Services" and sends them back to the indexers.

Configuration applied on the Splunk Heavy Forwarder (can be applied on an indexer if you don't have an HF)

hf_in_azure_nsg_app/default

inputs.conf

#Inputs are defined directly in the Splunk HF via the Web UI with "Splunk Add-on for Microsoft Cloud Services" and can be found here: /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/local/inputs.conf

props.conf

#NOTE: The following setup extracts only the flowTuples from the payload and sets _time based on the flowTuples epoch
#First LINE_BREAKER applies, then SEDCMD-remove_not_epoch keeps only the flowTuples, then TRANSFORMS with INGEST_EVAL overwrites _time
#flowTuples data parsing is done at search time on the Search Head with a separate app
#The "source" field already contains the resourceId information (subscriptionID, resourceGroupName, nsgName, macAddress) that can be extracted on the Search Head at search time
#NOTE 2: LINE_BREAKER has been enhanced to avoid breaking events on a macAddress whose first 10 characters are numeric digits
#TO BE DONE: Understand if SEDCMD- has some limit on very large payloads
#TO BE DONE 2: In the INGEST_EVAL, use a case statement so that if the length is lower than 10 digits, now() is set as _time
#References:
#https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-overview
#https://community.splunk.com/t5/Splunk-Search/How-do-I-import-Azure-NSG-LOGs/td-p/396018
#https://community.splunk.com/t5/Getting-Data-In/How-to-extract-an-event-timestamp-where-seconds-and-milliseconds/m-p/428837

[mscs:nsg:flow]
MAX_TIMESTAMP_LOOKAHEAD = 10
LINE_BREAKER = (\")\d{10}\,|(\"\,\")\d{10}\,
SHOULD_LINEMERGE = false
SEDCMD-remove_not_epoch = s/\"\D.*$|\{|\}|\[|\]//g
TRUNCATE = 50000000
TRANSFORMS-evalingest = nsg_eval_substr_time

transforms.conf

[nsg_eval_substr_time]
INGEST_EVAL = _time=substr(_raw,0,10)

Configuration applied on the Splunk Search Head

sh_azure_nsg_app/default

props.conf

#References:
#https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-overview
#https://community.splunk.com/t5/Splunk-Search/How-do-I-import-Azure-NSG-LOGs/td-p/396018
#https://community.splunk.com/t5/Getting-Data-In/How-to-extract-an-event-timestamp-where-seconds-and-milliseconds/m-p/428837

[mscs:nsg:flow]
REPORT-tuples = extract_tuple_v1, extract_tuple_v2
REPORT-nsg = sub_res_nsg
FIELDALIAS-mscs_nsg_flow = dest_ip AS dest src_ip AS src host AS dvc
EVAL-action = case(traffic_result == "A", "allowed", traffic_result == "D", "blocked")
EVAL-protocol = if(match(src_ip, "^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$"), "ip", "unknown")
EVAL-direction = case(traffic_flow == "I", "inbound", traffic_flow == "O", "outbound")
EVAL-transport = case(transport == "T", "tcp", transport == "U", "udp")
EVAL-bytes = (coalesce(bytes_in,0)) + (coalesce(bytes_out,0))
EVAL-packets = (coalesce(packets_in,0)) + (coalesce(packets_out,0))
EVAL-flow_state_desc = case(flow_state == "B", "begin", flow_state == "C", "continuing", flow_state == "E", "end")

transforms.conf

[extract_tuple_v1]
DELIMS = ","
FIELDS = time,src_ip,dest_ip,src_port,dest_port,transport,traffic_flow,traffic_result

[extract_tuple_v2]
DELIMS = ","
FIELDS = time,src_ip,dest_ip,src_port,dest_port,transport,traffic_flow,traffic_result,flow_state,packets_in,bytes_in,packets_out,bytes_out
[sub_res_nsg]
SOURCE_KEY = source
REGEX = SUBSCRIPTIONS\/(\S+)\/RESOURCEGROUPS\/(\S+)\/PROVIDERS\/MICROSOFT.NETWORK\/NETWORKSECURITYGROUPS\/(\S+)\/y=\d+\/m=\d+\/d=\d+\/h=\d+\/m=\d+\/macAddress=(\S+)\/
FORMAT = subscriptionID::$1 resourceGroupName::$2 nsgName::$3 macAddress::$4

eventtypes.conf

[mscs_nsg_flow]
search = sourcetype=mscs:nsg:flow src_ip=*

[mscs_nsg_flow_start]
search = sourcetype=mscs:nsg:flow flow_state=B

[mscs_nsg_flow_end]
search = sourcetype=mscs:nsg:flow flow_state=E

tags.conf

[eventtype=mscs_nsg_flow]
network = enabled
communicate = enabled

[eventtype=mscs_nsg_flow_start]
network = enabled
session = enabled
start = enabled

[eventtype=mscs_nsg_flow_end]
network = enabled
session = enabled
end = enabled

Best Regards,
Edoardo
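As a quick sanity check of the extractions above, a hedged example search; the field names come from the props/transforms/eventtypes shown, while the grouping and sort are just one possible choice:

eventtype=mscs_nsg_flow action=blocked direction=inbound
| stats sum(bytes) AS bytes sum(packets) AS packets BY src dest dest_port nsgName
| sort - bytes

If the tuple extraction and the sub_res_nsg transform are working, this should list the most-blocked inbound flows per NSG.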