Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Thanks a lot for your help and answers. By the way, how do I contact a local Splunk partner or Splunk directly?
Can you describe your question a little bit more for us? I'm not sure what you are asking. Splunk can ingest more than a PB per day; it just depends on how the environment has been built and what its capacity is. Data is stored in buckets on local disks or in an S3 bucket, e.g. in AWS, or in equivalent storage on GCP, Azure, or on-prem. All of this is described on docs.splunk.com. If needed, you can contact your local Splunk Partner or Splunk directly and they can present it to you. There are lots of videos, .conf presentations, etc. that tell more about Splunk.
How high is the incoming data volume for monitoring? Where is the data stored?
Can you share the JavaScript you generated and added to the app? Did you enable SPA Monitoring in the configuration within AppDynamics, etc.?
"There's an app for that" Thank you!
Hello Everyone, I'm currently exploring Splunk Observability Cloud to send log data. From the portal, it appears there are only two ways to send logs: via Splunk Enterprise or Splunk Cloud. I'm curious if there's an alternative method to send logs using the Splunk HTTP Event Collector (HEC) exporter. According to the documentation here, the Splunk HEC exporter allows the OpenTelemetry Collector to send traces, logs, and metrics to Splunk HEC endpoints. Is it also possible to use fluentforward, otlphttp, signalfx, or anything else for this purpose? Additionally, I have an EC2 instance running the splunk-otel-collector service, which successfully sends infrastructure metrics to Splunk Observability Cloud. Can this service also facilitate sending logs to Splunk Observability Cloud? According to the agent_config.yaml file provided by the splunk-otel-collector service, there are several pre-configured service settings related to logs, including logs/signalfx, logs/entities, and logs. These configurations use different exporters such as splunk_hec, splunk_hec/profiling, otlphttp/entities, and signalfx. Could you explain what each of these configurations is intended to do?

service:
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [otlphttp, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway, signalfx]
    metrics:
      receivers: [hostmetrics, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx, statsd]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]
    metrics/internal:
      receivers: [prometheus/internal]
      processors: [memory_limiter, batch, resourcedetection, resource/add_mode]
      # When sending to gateway, at least one metrics pipeline needs
      # to use signalfx exporter so host metadata gets emitted
      exporters: [signalfx]
    logs/signalfx:
      receivers: [signalfx, smartagent/processlist]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
    logs/entities:
      # Receivers are dynamically added if discovery mode is enabled
      receivers: [nop]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [otlphttp/entities]
    logs:
      receivers: [fluentforward, otlp]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [splunk_hec, splunk_hec/profiling]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]

Thanks!
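For reference, here is my rough sketch of what I think a direct HEC-based logs pipeline would look like; the token and endpoint values are placeholders and assumptions, not values taken from my actual config:

# hypothetical sketch - token and endpoint are placeholders, substitute your own values
exporters:
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "${SPLUNK_HEC_URL}"
    source: "otel"
    sourcetype: "otel"

service:
  pipelines:
    logs:
      receivers: [fluentforward, otlp]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [splunk_hec]

My current reading is that the plain logs pipeline is the one that forwards application logs via splunk_hec, while logs/signalfx appears to carry process-list/event data and logs/entities discovery data, but please correct me if that's wrong.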
Hi @zksvc, you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.2/Forwarding/Aboutforwardingandreceivingdata; there are also many videos that explain this. In a few words:
- enable Splunk to receive logs
- install the Universal Forwarder on the Windows system
- install Splunk_TA_Windows on the Universal Forwarder
- enable inputs in Splunk_TA_Windows
Ciao. Giuseppe
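As a rough illustration of those steps (the host name, port, and event log channel below are only examples, adjust them to your environment), the configuration boils down to a few .conf stanzas:

# On the receiving Splunk instance - inputs.conf (listen on the default forwarding port)
[splunktcp://9997]
disabled = 0

# On the Universal Forwarder - outputs.conf (example indexer host)
[tcpout:primary_indexers]
server = your-indexer.example.com:9997

# On the Universal Forwarder, in Splunk_TA_Windows local/inputs.conf - enable an event log input
[WinEventLog://Security]
disabled = 0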
Hi Everyone, I created my own lab to learn how to configure best practices for Windows. I created one Windows VM and ran a scan locally (127.0.0.1) to get information like open ports or something else. But unfortunately, when it triggers I can't see any results. Maybe I need to configure something in my Windows VM or somewhere else?
You can use the regex approach as @gcusello suggested, with a small modification:

| rex field=alert.alias "(?<field1>[^_]+(_[^_]+){2})_(?<field2>.+)"

Because the string is strictly formatted, you can also use split to achieve the same. Depending on the number of events you handle, the following could be more economical.

| eval elements = split('alert.alias', "_")
| eval field1 = mvjoin(mvindex(elements, 0, 2), "_"), field2 = mvjoin(mvindex(elements, 2, -1), "_")

Here is an emulation:

| makeresults format=csv data="alert.alias
STORE_8102_BOXONE_MX_8102
STORE_8102_BOXONE_MX_8102_01"

Either of the above searches gives

alert.alias                    field1              field2
STORE_8102_BOXONE_MX_8102      STORE_8102_BOXONE   BOXONE_MX_8102
STORE_8102_BOXONE_MX_8102_01   STORE_8102_BOXONE   BOXONE_MX_8102_01
Sure. startswith and endswith can also be sophisticated.

| rename "Log text" as LogText
| transaction maxspan=120s startswith=eval(match(LogText, "\bdisconnected\b")) endswith=eval(match(LogText, "\bconnected\b")) keeporphans=true
| where isnull(closed_txn)

Here is an emulation

| makeresults format=csv data="Row, _time, Log text
1, 7:00:00am, text connected\ntext
2, 7:30:50am, text\ndisconnected\n\ntext
3, 7:31:30am, text connected\ntext
4, 8:00:10am, text\ndisconnected\n\ntext
5, 8:10:30am, text\ndisconnected\n\ntext"
| eval _time = strptime(_time, "%I:%M:%S%p"), "Log text" = replace('Log text', "\\\n", " ")
| sort - _time
``` data emulation above ```

The above search gives

LogText Row _raw _time closed_txn duration eventcount field_match_sum linecount
text disconnected text 5 1 2024-12-18 08:10:30
text disconnected text 4 1 2024-12-18 08:00:10
I don't think I understand what you're trying to do. $Beta status:result._statusNumber$ is a token set by your search, "Beta status", and therefore has no default value. The screenshot you've shown is for setting tokens when users click on a visualisation. The two things are not related, really, other than how they are used in source code.  What issue are you trying to solve, exactly? If the token isn't working, have you made sure you've checked the "Access search results or metadata" box in the data source config? 
@KyleMika Heya!  This works for me, at least as I understand what you are trying to do. This is the URL (anonymised) for a dashboard I'm using:  https://xxxxx.splunkcloud.com/en-US/app/<app_name>/<dashboard>?form.LHN=LHN&form.dd_qw5k3GKd=*&form.filter=age&form.l1=*&form.l2=*&form.l3=*&form.rm=*&form.tok_asset_name=*&form.tok_rm=CORE&form.tok_specialty=*&form.tok_tier=*&form.tok_test=nope The last input is in the canvas, the rest are above. All are successfully reproduced if I share it.  Can you share the URL that you are seeing/sharing with others? 
Heya Splunk Community folks, In an attempt to make a fairly large table in DS readable, I was messing around with fontSize, and I noted that the JSON parser in the code editor was telling me that pattern: "^>.*" is valid for the property: options.fontSize. Is that actually enabled in DS, does anyone know? In other words, can I put a selector/formatting function in (for example, formatByType) and have the fontSize selected based on whether the column is a number or text type? If so, what's the syntax for the context definition? For example, is there a way to make this work? "fontSize": ">table | frameBySeriesTypes(\"number\",\"string\") | formatByType(fontPickerConfig)" (If not, there should be!) Thanks!
A data model is created with a root search dataset and is set to acceleration as well.

rootsearchquery1: index=abc sourcetype=xyz field_1="1"
rootsearchquery2: index=abc sourcetype=xyz field_1="1" | fields _time field_2 field_3

For both queries, the auto-extracted fields are added (_time, field_2, field_3). These are general questions for better understanding; I would like suggestions on which usage (tstats, | datamodel, root event, root search with a streaming command, root search without a streaming command) is preferable in which scenario.

1. | datamodel datamodelname datasetname | stats count by field_3
For Query 1 (root search without a streaming command), the output is pretty fast, just below 10 seconds. For Query 2 (root search with a streaming command), the output takes more than 100 seconds.

2. For Query 2, the tstats command also takes more than 100 seconds and only gives results when summariesonly=false is added; why does it not give results when summariesonly=true is added? For Query 1, it works with both summariesonly=false and true, and the output is pretty fast, less than 2 seconds actually. So in what scenario is it said that streaming commands can be added to the root search and accelerated, when in return the generated query adds the fields twice, which makes it more inefficient? E.g., this is for Query 2:

| datamodel datamodelname datasetname | stats count by properties.ActionType

Underlying query that is running:

(index=* OR index=_*) index=abc sourcetype="xyz" field_1="1" _time=* DIRECTIVES(READ_SUMMARY(datamodel="datamodelname.datasetname" summariesonly="false" allow_old_summaries="false")) | fields "_time" field_2 field_3 | search _time = * | fields "_time" field_2 field_3 | stats count by properties.ActionType

3. In general, what is recommended?
- When a data model is accelerated, using either | datamodel or | tstats gives better performance.
- When a data model is not accelerated, only | tstats gives better performance.
Is this correct?

4. When a data model is not accelerated, the | datamodel command pulls the data from the raw buckets, so what is the use of querying the data through the data model instead of the index directly, when the performance is the same?

5. While querying | datamodel datamodelname datasetname, why does Splunk add (index=* OR index=_*) by default? Can it be changed?
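For comparison, a sketch of the tstats form of the same count; the data model, dataset, and field names simply mirror the example above and are assumptions to adjust:

| tstats summariesonly=true count from datamodel=datamodelname.datasetname by datasetname.field_3
``` rename the prefixed field back to its plain name ```
| rename datasetname.field_3 as field_3

Note that summariesonly=true restricts tstats to data already present in the acceleration summaries, so it returns nothing when the summaries do not cover that dataset or time range; summariesonly=false falls back to searching the raw events.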
Something like this?

| makeresults format=csv data="hostname
cn=192_168_1_1
cn=myhost
otherhostnane"
| rex field=hostname "cn=(?<ipAddr>\d{1,3}[._]\d{1,3}[._]\d{1,3}[._]\d{1,3})"
| eval hostname=coalesce(replace(ipAddr, "_", "."), hostname)
@Travlin1 something like this?

| makeresults
| eval cn=mvappend("192_168_1_1", "10_0_0_5", "webserver-prod01", "172_16_32_1", "database.example.com", "192_168_0_badformat", "dev_server_01")
| mvexpand cn
| eval converted_host=case(match(cn, "^\d+_\d+_\d+_\d+$"), replace(cn, "_", "."), true(), cn)
| eval host_type=case(match(cn, "^\d+_\d+_\d+_\d+$"), "ip_address", true(), "hostname")
| table cn, converted_host, host_type

If this helps, please upvote.
I am trying to track file transfers from one location to another.

Flow: Files are copied to the File copy location -> Target Location

Both the File copy location and Target Location logs are in the same index, but each has its own sourcetype. The File copy location has a log event per file, but the Target Location has log events that contain multiple file names.

Log format of the File copy location:
2024-12-18 17:02:50, file_name="XYZ.csv", file copy success
2024-12-18 17:02:58, file_name="ABC.zip", file copy success
2024-12-18 17:03:38, file_name="123.docx", file copy success
2024-12-18 18:06:19, file_name="143.docx", file copy success

Log format of the Target Location:
2024-12-18 17:30:10 <FileTransfer status="success">
    <FileName>XYZ.csv</FileName>
    <FileName>ABC.zip</FileName>
    <FileName>123.docx</FileName>
</FileTransfer>

Desired result:
File Name    FileCopyLocation       Target Location
XYZ.csv      2024-12-18 17:02:50    2024-12-18 17:30:10
ABC.zip      2024-12-18 17:02:58    2024-12-18 17:30:10
123.docx     2024-12-18 17:03:38    2024-12-18 17:30:10
143.docx     2024-12-18 18:06:19    Pending

Since the events are in the same index and there are many events, I do not want to use join.
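Here is a rough join-free sketch I am considering, built around stats; the index and sourcetype names below are placeholders, and the rex patterns assume the exact formats shown above. Corrections welcome:

index=your_index (sourcetype=filecopy OR sourcetype=filetransfer)
``` pull the single file name from copy events and the multi-valued names from the XML events ```
| rex "file_name=\"(?<file_name>[^\"]+)\""
| rex max_match=0 "<FileName>(?<xml_file_name>[^<]+)</FileName>"
| eval file_name=coalesce(file_name, xml_file_name)
| mvexpand file_name
``` keep one timestamp column per location, then collapse to one row per file ```
| eval copy_time=if(sourcetype="filecopy", _time, null()), target_time=if(sourcetype="filetransfer", _time, null())
| stats min(copy_time) as copy_epoch min(target_time) as target_epoch by file_name
| eval FileCopyLocation=strftime(copy_epoch, "%Y-%m-%d %H:%M:%S"), "Target Location"=coalesce(strftime(target_epoch, "%Y-%m-%d %H:%M:%S"), "Pending")
| table file_name FileCopyLocation "Target Location"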
Hello everyone! I most likely could solve this problem if given enough time, but I never seem to have enough. Within Enterprise Security, we pull asset information via LDAPsearch into our ES instance hosted in Splunk Cloud. The cn=* field contains multiple values, both IPs and hostnames. We aim for host fields to be either hostname or nt_host, but some of these values are written as such: cn=192_168_1_1. I want to evaluate the existing field and output those as normal dotted decimals when seen. I am assuming I would need an if statement that keeps hostname values intact while otherwise performing the conversion. I am not at a computer right now but will update with some data and my progress thus far. Thanks!
This fits into the "proving a negative" category, where you're trying to find things that are NOT reporting. This is the general way to do that:

index=firewalls OR index=alerts AND host="*dmz-f*"
| rex field=host "(?<hostname_code>[a-z]+\-[a-z0-9]+)\-(?<device>[a-z]+\-[a-z0-9]+)"
| lookup device_sites_master_hostname_mapping.csv hostname_code OUTPUT Workday_Location
| stats dc(host) as hosts_reporting by Workday_Location
| append [
    | inputlookup fw_asset_lookup.csv where ComponentCategory="Firewall*"
    | stats count as expected_hosts by WorkDay_Location ]
| stats values(*) as * by WorkDay_Location
| eval diff=expected_hosts - hosts_reporting

So you run your basic search to count the reporting hosts, append the list of hosts you expect to see, join the two together with the last stats, and then calculate the difference.
You might double-check that, but if I remember correctly, CSV lookups do a linear search through their contents, so in the pessimistic case you'll be doing a million comparisons per input row only to return a negative match. It has nothing to do with fuzziness.