All Posts

Have you compared the emulation with real data?  Also, temporarily remove that table command, which can get in the way. (You can add formatting back after you verify that the outputs are satisfactory.)  Is there some real data that you can share? (Anonymize as needed, but take care to preserve the precise structure.)  Using emulation, the output is not zero.  Clearly, the actual data is different from what you posted above. Run this:

| makeresults
| eval _raw = "{\"date\": \"1/2/2022 00:12:22,124\", \"DATA\": \"[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success\", \"tags\": {\"host\": \"GTU5656\", \"insuranceid\": \"8786578896667\", \"lib\": \"app\"}}"
| spath
| eval _time = strptime(date, "%d/%m/%Y %H:%M:%S,%f")
``` the above emulates index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success") ```
| rex field=DATA mode=sed "s/ *[\|}\]]/\"/g s/: *\[*/=\"/g"
| rename DATA AS _raw
| kv
| search ACTION=start OR ACTION=done NOT SERVICE="null"
| eval split=SERVICE.":".ACTION
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *START *DONE

Do you get the same results as I did in the previous comment? (I do not encourage using screenshots to show searches or results, but I had already shared them in text previously. So, here you go for a screenshot.)
Hello, good day team! How are you? I downloaded and installed this app, but I can't find the "TA genesys cloud". Where can I download it? Does the TA live in another repository? Could you help me get this TA, please? If the TA isn't currently on Splunkbase, could you send it to me via email? Thanks in advance! Carlos Martínez. carloshugo.martinez@edenred.com Edenred.
What happens if you use the v2 jobs endpoint? (The non-v2 one is deprecated, as per https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTsearch)

Instead of:

url = "https://abc.splunkcloud.com:8089/servicesAB/-/xyz/search/jobs"

Try:

url = "https://abc.splunkcloud.com:8089/servicesAB/-/xyz/search/v2/jobs"
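For context, here is a minimal Python sketch of dispatching a search through the v2 jobs endpoint. It assumes the standard servicesNS namespace path and placeholder host, app, and credentials; adjust everything to match your environment.

import requests

# Minimal sketch: dispatch a search via the v2 jobs endpoint.
# Host, namespace app ("xyz"), and credentials below are placeholders.
base = "https://abc.splunkcloud.com:8089"
resp = requests.post(
    f"{base}/servicesNS/-/xyz/search/v2/jobs",
    auth=("your_user", "your_password"),
    data={"search": "search index=_internal | head 5", "output_mode": "json"},
    verify=False,  # for lab/self-signed certs only
)
resp.raise_for_status()
print(resp.json()["sid"])  # job ID you can then poll for results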
From searching the Postman public collections, there seems to be a Splunk Postman collection, built using the public tutorialdata.zip file. https://www.postman.com/njrusmc/workspace/public-collections/collection/14123647-1408e0a3-c0bb-4e08-83b1-8f83fdc8f1c0?tab=overview
Unless I am mistaken, the multiselect input should provide the checkbox functionality you desire. Does multiselect work for your use case?
These lookups get their information from configured asset lookups within Enterprise Security, as you linked. They're populated automatically. Do your asset lookups have CIDR information included in them? If the string lookup is populating, then you have some kind of assets configured. If you have a dev environment experiencing this problem, you might try enabling the demo_asset_lookup in the Asset and Identity Management page to see if it populates the CIDR one automatically. It has CIDR networks properly built into it. The best official documentation I came across in my search was https://docs.splunk.com/Documentation/ES/7.3.1/Admin/Howassetandidentitydataprocessed
According to the docs for searchbnf.conf, setting the shortdesc field to a value will make it so that the description field only appears when the search assistant is set to "Full" mode. https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Searchbnfconf If you would like description to always appear in that window, and it is not too long, then you could comment out the shortdesc field. Or you could set your user preference so that the Search assistant is in "Full" mode. (User "Preferences"->"SPL Editor"->"Search assistant". It defaults to "Compact")
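To illustrate, here is a minimal searchbnf.conf sketch; the command name and help text are hypothetical:

[mycommand-command]
syntax = mycommand (<field>)?
shortdesc = One-line summary shown when the search assistant is in Compact mode.
description = Longer help text shown when the search assistant is in Full mode.
usage = public

Commenting out the shortdesc line above should make the description appear regardless of mode.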
If the KVStore initialization has failed, you'll definitely have a lot of issues. You'll need to figure out why that is by looking through $SPLUNK_HOME/var/log/splunk/mongod.log to see what's going on there and address it first.
Hi Splunkers, I have a problem with a per-event index routing use case. In the involved environment, some data is currently collected in an index named ot. Some of these logs must be split out and redirected to other indexes, with the naming convention ot_<technology>. The involved inputs.conf file is placed under a dedicated app, named simply customer_inputs. The procedure to use is very clear to us: inside the above app, we created props.conf and transforms.conf and worked with keys and regexes. The strange behavior is this: if we work on redirecting one kind of log, it works perfectly. When we add another log subset, nothing works properly. Let me share an example.

Scenario 1
In this case, we want:
Windows logs must go to the ot_windows index.
All remaining logs must still go to the ot index.
We can identify the involved logs based on ports; they come in as a network input on port 514 udp, in CEF format.
First, our props.conf:

[source::udp:514]
TRANSFORMS-ot_windows = windows_logs

Second, our transforms.conf:

[windows_logs]
SOURCE_KEY = _raw
REGEX = <our_regex>
DEST_KEY = _MetaData:Index
FORMAT = ot_windows

This configuration works fine: Windows logs go to the ot_windows index, and all remaining ones still go to the ot index. Then we tried another configuration, explained in the second scenario.

Scenario 2
In this case, we want:
Nozomi logs must go to the ot_nozomi index.
All remaining logs must still go to the ot index.
Again, we can identify the involved logs based on ports; they come in as a network input on port 514 udp, in CEF format.
First, our props.conf:

[source::udp:514]
TRANSFORMS-ot_nozomi = nozomi_logs

Second, our transforms.conf:

[nozomi_logs]
SOURCE_KEY = _raw
REGEX = <our_second_regex>
DEST_KEY = _MetaData:Index
FORMAT = ot_nozomi

Again, this configuration works fine: all Nozomi logs go to the dedicated index, ot_nozomi, while all remaining ones still go to the ot index.

ISSUE
So, if we apply either one of the above configurations alone, we get the expected behavior. However, when we try to merge them, nothing works: both Windows and Nozomi logs continue to go to the ot index. Since each works fine on its own, we suspect the error is not in the regexes used, but in how we perform the merge. Currently, our merged conf files look like this:
props.conf:

[source::udp:514]
TRANSFORMS-ot_windows = windows_logs
TRANSFORMS-ot_nozomi = nozomi_logs

transforms.conf:

[windows_logs]
SOURCE_KEY = _raw
REGEX = <our_regex>
DEST_KEY = _MetaData:Index
FORMAT = ot_windows

[nozomi_logs]
SOURCE_KEY = _raw
REGEX = <our_second_regex>
DEST_KEY = _MetaData:Index
FORMAT = ot_nozomi

Is our assumption right? If yes, what is the correct merge structure? (One alternative structure we are wondering about is sketched below.)
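For reference, the alternative we have in mind (not yet tested in our environment) is a single TRANSFORMS class in props.conf that lists both transforms, comma-separated, so they are evaluated in order within one class:

[source::udp:514]
TRANSFORMS-ot_routing = windows_logs, nozomi_logs

The transforms.conf stanzas would stay exactly as above; only the props.conf side changes.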
I'd recommend seeing if your home router/firewall can stream syslog and setting up a forwarder to ingest those logs regularly. Install a forwarder on your PC and monitor your Windows/Linux/whatever-you-use logs that way as well (a sketch of possible inputs is below). I think that would be a good start for messing around with data, at least. Also, check out https://www.splunk.com/en_us/training/course-catalog.html?sort=Newest&filters=filterGroup1FreeCourses if you haven't. There's lots of good content there for beginners.
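For instance, a minimal inputs.conf sketch for the forwarder; the port, paths, and index are examples, so adjust them for your setup:

# listen for syslog streamed from the router/firewall
[udp://514]
index = main
sourcetype = syslog

# tail a local log file on the machine running the forwarder
[monitor:///var/log/syslog]
index = main
sourcetype = syslog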
A good way to give a service account access to Splunk is to use Authentication Tokens. Ref: https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/UseAuthTokens Is there anything specific you would like the service account to do?
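As an illustration, here is a minimal Python sketch of calling the REST API with a token; the host and token are placeholders, and the current-context endpoint is just a convenient way to confirm which identity the token maps to:

import requests

# Placeholder host and token; create the token under Settings > Tokens.
headers = {"Authorization": "Bearer <your-token>"}
resp = requests.get(
    "https://splunk.example.com:8089/services/authentication/current-context",
    headers=headers,
    params={"output_mode": "json"},
    verify=False,  # for lab/self-signed certs only
)
resp.raise_for_status()
print(resp.json()["entry"][0]["content"]["username"])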
I don't believe it is enabled for Cloud Trials: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/RESTTUT/RESTandCloud
I haven't tried this yet but it sounds very helpful.  I'm going to try it later this afternoon. Thank you!  
A better way to do this is to use the "Run a Script" alert action (after you create a script to do the copy).  Yes, this alert action is deprecated, but I use it often and there is no way Splunk will remove it from the product.
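For reference, a minimal Python sketch of such a copy script, assuming the legacy scripted alert interface (which passes the path of the gzipped results file as the eighth argument); the file name and destination are hypothetical:

# e.g. $SPLUNK_HOME/bin/scripts/copy_results.py
import shutil
import sys

# Legacy "Run a Script" alert actions pass the path to the gzipped
# search results file as the eighth argument.
results_file = sys.argv[8]
shutil.copy(results_file, "/tmp/alert_results.csv.gz")  # example destination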
The base instructions here should be all you need to follow for that one. Review the rest of the 'planning' and 'securing' sections of that documentation to see any additional details you might be curious about, though.
Here are a few questions from the session (get the full Q&A deck and live recording in the #office-hours Slack channel):

Q1: How do I configure the yaml for APM, including any performance best practices?
OpenTelemetry Collector configuration is stored in a YAML file and specifies the behavior of receivers, processors, exporters, and other components. The Splunk Distribution of the OpenTelemetry Collector comes with a default configuration that can be modified as needed for your environment. It can be helpful to visualize collector configuration using a tool such as OTelBin.
Documentation:
Configure the Collector for Kubernetes with Helm
Collector for Linux default configuration
Collector for Windows default configuration
OTelBin

Q2: Can we generate a component call flow?
Splunk automatically generates a service map showing the dependencies and call flow among your instrumented and inferred services in APM. The map is dynamically updated based on your selections in the time range, environment, workflow, service, and tag filters. You can use the service map to identify dependencies, performance bottlenecks, and error propagation.
Documentation:
View dependencies among your services in the service map

Q3: Can you cover AlwaysOn Profiling for .NET?
To activate AlwaysOn Profiling, set the SPLUNK_PROFILER_ENABLED environment variable to true. To activate memory profiling, set the SPLUNK_PROFILER_MEMORY_ENABLED environment variable to true after activating AlwaysOn Profiling.
Documentation:
Activate AlwaysOn Profiling (.NET)

Q4: How can we get SQS to integrate into a trace for APM?
For some languages, this is included with auto-instrumentation. For example, OpenTelemetry Java instrumentation includes out-of-the-box support for the AWS SDK, which automatically instruments calls to SQS. You can search the OpenTelemetry registry to find details regarding SQS support in other languages.
Documentation:
https://github.com/open-telemetry/opentelemetry-java-instrumentation/tree/main/instrumentation/aws-sdk
https://opentelemetry.io/ecosystem/registry/?s=sqs&component=&language=

Q6: Can you touch on API performance and availability?
Create availability SLOs in o11y Cloud.
Documentation:
https://docs.splunk.com/observability/en/alerts-detectors-notifications/slo/create-slo.html#measure-and-track-your-service-health-metrics-with-service-level-objectives-slos

Live Questions:

Q1: How do I know which attributes to use for breaking down the service map?
Blog: Up Your Observability Game with Attributes
Docs: Instrument your application code to add tags to spans
Docs: Filter spans within a particular trace

Q2: Do we need to set both profiling variables for auto-instrumentation as well?
Yes. To enable AlwaysOn Profiling with zero-config auto-instrumentation, you can add the following parameters to enable CPU and memory profiling:
--enable-profiler
--enable-profiler-memory
Docs: Activate AlwaysOn Profiling

Q3: Can I create customized variables and filters for APM dashboards?
Documentation: Customize Dashboard Filters
The Forwarder Management screen applies only to Deployment Server (DS) instances.  A DS is a Splunk instance type that ensures each forwarder has the configuration (apps) it needs.  DSs are optional and are unnecessary when you only have a single forwarder. When you installed the forwarder, did you configure it to forward data to the server?  If so, then you should be seeing data from the forwarder.  Verify that by searching for

index=_internal host=f1

Make sure that returns results before continuing further. The next step is telling the forwarder what you want it to forward.  By default, it only sends its own logs.  Install the Splunk Add-on for Windows (https://splunkbase.splunk.com/app/742) on the forwarder and turn on (set disabled=0) the inputs you desire (see the sketch below).  Be sure to restart the forwarder after changing inputs.conf settings.
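For example, a minimal inputs.conf sketch to enable Security event log collection through the add-on; the index is a placeholder, so use whichever index you created for this data:

# local/inputs.conf inside the Splunk Add-on for Windows on the forwarder
[WinEventLog://Security]
disabled = 0
index = main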
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
The current query can't do that because it only looks at failed logins.  It will never see a successful login. The solution will entail appending a tstats command that counts successes and then modifying the where command to look for 6 or more failures and at least 1 success (a rough sketch is below). You can find an example in the Basic Brute Force Detection use case in the Splunk Security Essentials app.
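For illustration, a rough SPL sketch, assuming the Authentication data model with an action field holding success/failure values; adapt the field and split-by names to your data:

| tstats count AS failures from datamodel=Authentication where Authentication.action="failure" by Authentication.src
| append [| tstats count AS successes from datamodel=Authentication where Authentication.action="success" by Authentication.src]
| stats sum(failures) AS failures sum(successes) AS successes by Authentication.src
| fillnull value=0 failures successes
| where failures >= 6 AND successes >= 1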
I installed Splunk Enterprise on a server named s1.  I installed a forwarder on server f1. Both are Windows Server 2019. When I go into Forwarder Management, s1 sees f1, but I can't DO anything with it.  There's nothing on the Forwarder Management screen to CONFIGURE.   If I go to Settings | Data Inputs and try to configure "Remote Performance monitoring" (just as a test, just to monitor something), it says it's going to use WMI and that I should use a forwarder instead. Yes, please.  I want to use a forwarder instead.  I want to use my new forwarder, but I just don't see how.