All Posts


Hi @shimada-k , please try this:

index=your_index ("tags.next-hop-group"=* OR "tags.index"=*)
| rename "tags.next-hop-group" AS tags_next_hop_group "tags.index" AS tags_index "ipv4-entry_prefix" AS ipv4_entry_prefix "network-instance_name" AS network_instance_name
| eval tags_index=coalesce(tags_index, tags_next_hop_group)
| stats values(ipv4_entry_prefix) AS ipv4_entry_prefix values(network_instance_name) AS network_instance_name values(interface) AS interface BY tags_next_hop_group

In other words, you have to coalesce the events using the "tags.next-hop-group" and "tags.index" fields and use the result as the key in a stats command. I had to rename your fields because the eval and stats commands sometimes don't work correctly when a field name contains spaces, dots, or minus characters. Ciao. Giuseppe
Thank you for your answer. All of these points have already been analyzed and taken into account. I was expecting more insights and input from the experience and challenges that Splunkers may have faced. However, I also know that our setup is a bit atypical and does not reflect most of the enterprise setups that others may work with.
Hi Experts, I would like to create the following table from the three events below.

ipv4-entry_prefix    network-instance_name    interface
----------------------------------------------------------------------
1.1.1.0/24           VRF_1001                 Ethernet48

Both event#1 and event#2 have the "tags.next-hop-group" field, and both event#2 and event#3 have the "tags.index" field. All events are stored in the same index. I tried to write a proper SPL query to achieve the above, but I couldn't. Could you please tell me how to achieve this?

- event#1
{ "name": "fib", "timestamp": 1717571778600, "tags": { "ipv4-entry_prefix": "1.1.1.0/24", "network-instance_name": "VRF_1001", "next-hop-group": "1297036705567609741", "source": "r0", "subscription-name": "fib" } }

- event#2
{ "name": "fib", "timestamp": 1717572745136, "tags": { "index": "140400192798928", "network-instance_name": "VRF_1001", "next-hop-group": "1297036705567609741", "source": "r0", "subscription-name": "fib" }, "values": { "index": "140400192798928" } }

- event#3
{ "name": "fib", "timestamp": 1717572818890, "tags": { "index": "140400192798928", "network-instance_name": "VRF_1001", "source": "r0", "subscription-name": "fib" }, "values": { "interface": "Ethernet48" } }

Many thanks,
Kenji
Hi, I think this will work for you:

index=foo message="magic string"
| eventstats p99(duration) as p99val
| stats count(eval(duration > p99val)) as count
I got an error like this:

There was an error processing the upload. Error during app install: failed to extract app from C:\Windows\TEMP\tmp5kgytoy5 to C:\Program Files\Splunk\var\run\splunk\bundle_tmp\2d60c8764b856899: The system cannot find the path specified.
I have something that I think works, but I don't know how (in)efficient it is:

index=foo message="magic string"
| eventstats p99(duration) as p99val
| where duration > p99val
| stats count as "# of Events with Duration > p99"

It seems to take a long time to complete as soon as I add in the "| stats count" bit. Simply getting events seems pretty quick. Is this a good approach and/or how can I improve it?
Thanks for the reply. Is it normal to see one sourcetype configured for 3 sources, where they are files on a Linux box that have the same timestamp pattern, but the logs are different and the line counts are different too?
Rather than make a datamodel to fit your data, make the data fit existing datamodels. Use field aliases and evals to map the proprietary log fields to DM fields. Not all fields in a datamodel have to be populated, so don't worry if you can't get all of them. Events with different line counts are normal and are not really considered a different format. For example, an event may contain a traceback, which can have an unpredictable number of lines. Events within the same source that have different formats (like the timestamp being in a different place) are another matter. A given log file really should contain a single format (sourcetype, in Splunk terms) for simpler processing.
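As a rough illustration of that mapping approach, here is a minimal props.conf sketch; the sourcetype, the vendor field names, and the chosen CIM fields are all made up and would need to be replaced with the customer's actual ones:

[acme:proprietary:log]
# Hypothetical sourcetype and field names, for illustration only.
# Alias the vendor's field names onto CIM field names.
FIELDALIAS-cim_src = client_ip AS src
FIELDALIAS-cim_dest = server_ip AS dest
FIELDALIAS-cim_user = login_name AS user
# Derive a CIM action field from a vendor-specific status code.
EVAL-action = if(status_code < 400, "allowed", "blocked")

The events would also need the tags expected by the chosen datamodel (typically via an eventtype plus tags.conf) before the CIM searches pick them up.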
As I said, you can ingest such data, but if you have a 10MB file with a single line of text, which would constitute a single event, you would have to make sure that the max line length limits are tweaked. Having said that, I am not sure how Splunk or the browser would handle a single 10MB event. Still, the answer really is that you _can_ ingest the data, but whether it will ultimately be a good fit for your purpose cannot easily be known. For example, does "geographic data" mean descriptions of landscape features or geological attributes, where you are looking to discover what type of rock may contain gold, or are you looking to get topographical information from coordinates and elevation data? Really, Splunk is good at taking multiple pieces of data and performing aggregations and correlations on that data.
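For reference, the line length limit mentioned above is controlled per sourcetype in props.conf. A minimal sketch, with a hypothetical sourcetype name and values that would depend on the actual data:

[geo:longline]
# Hypothetical sourcetype, for illustration only.
# Raise the per-line truncation limit (default is 10000 bytes); 0 disables truncation entirely.
TRUNCATE = 0
# Treat each line as its own event rather than attempting multiline merging.
SHOULD_LINEMERGE = false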
Addendum: No solution found yet
Hi all, I've got a customer with proprietary logs in their environment and they would like them to be CIM-mapped to a data model. The problem is that the logs don't fit any of the data models pre-configured for the CIM Mapping add-on, so I assume I will have to create a custom one that fits their environment.

The problem is, I have never done this before, so I would need some advice on how to tackle it. One thing that confuses me about their environment is that their custom logs can have different formats for one source; this means that one event might produce a log with 32 lines, another with 12 lines, etc.

How would I deal with this?
Hello Splunkers, I would like to know if it is possible, at the indexer layer, given a HEC input source, to route some incoming data (based on a regex, of course) to a UDP destination without indexing that data. Let's say sourcetype=hecinput: if an event contains the word "DEBUG" it should go to the UDP destination, and all the rest should be indexed as usual. I know that I could maybe use INGEST_EVAL, but I think it supports only _TCP_ROUTING. Thanks in advance.
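One possible direction, sketched with made-up stanza and group names, is regex-based routing to a syslog output group, which does support UDP; note that depending on which HEC endpoint is used, the events may or may not pass through the parsing pipeline where these transforms run, so this would need testing:

props.conf (on the indexer):
[hecinput]
TRANSFORMS-route_debug = send_debug_to_udp

transforms.conf:
[send_debug_to_udp]
REGEX = DEBUG
DEST_KEY = _SYSLOG_ROUTING
FORMAT = udp_debug_group

outputs.conf:
[syslog:udp_debug_group]
# Placeholder destination address and port.
server = 203.0.113.10:514
type = udp

The part I am least sure about is keeping those same DEBUG events out of the index at the same time: routing them to the nullQueue would normally discard them before they reach the output stage, so the "forward via syslog but do not index" combination would need to be verified in a test environment.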
Hello, We are attempting to use Splunk Cloud as a multi-tenant environment (one company, separate entities) in a single subscription. We have an index design that isolates data to each tenant and aligns with RBAC assignments. That gets us index-level isolation for data sources that are specific to each tenant. We also stamp all non-Splunk events with a tenant code so that role-based access restrictions can be used to filter returned data down to only that which matches your assigned code. This approach allows for event-level filtering in indexes where data is for ALL tenants, such as lastchanceindex.

The last set of logs we need to control access to is the underscore indexes. These events are collected based on the inputs.conf files that deploy with the HF and UF agents, which do NOT have our tenant codes associated with them. I was looking for any feedback from the community as to what the downside might be to copying ../etc/system/default/inputs.conf into ../etc/system/local/inputs.conf and adding "meta = tenantcode::<your_tenant_code_goes_here_without_the_angled_brackets>" to each stanza. At that point, all SPLUNK events in the underscore indexes would also contain tenant codes, and we'd be able to achieve event-level filtering there as well.

Thanks in advance for any feedback and opinions!
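For what it's worth, a lighter-weight variant of that idea, sketched here with a placeholder tenant code and assuming the standard setting name (_meta, with a leading underscore), is to add a single [default] stanza in ../etc/system/local/inputs.conf instead of copying the whole default file, since local settings are layered on top of the defaults:

# ../etc/system/local/inputs.conf on the HF/UF (sketch; "acme01" is a placeholder tenant code)
[default]
# Applied as a default to the input stanzas on this instance, including the internal-log inputs.
_meta = tenantcode::acme01

Whether overriding _meta at the default level has side effects on inputs that already set their own _meta (the more specific value would replace rather than merge, as far as I know) is worth verifying before rolling it out.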
Thanks for clarifying the point about the .conf file, but I have been using the setup page from the beginning, and when I run something to test it, like:

| chatgpt org="org-XXXXXXXXXXXXXX" prompt="what means hello mundo?" model="gpt-4o"

I get:

ERROR HTTP 404 Not Found -- Could not find object id=TA-openai-api:org_id_XXXXXXXXXXXXXXXX: ERROR cannot unpack non-iterable NoneType object

I copy-pasted the org_id from my company's OpenAI account. I also tried two keys, one belonging to the owner and the other created for a service, and neither of them is working.
Then you don't need to edit passwords.conf, just use the setup page.
Below is the query, which includes all the events for Windows shutting down and starting up. I want to exclude a host when event 4608 is observed within 5 minutes.

index=windows product=Windows (EventCode="4609" OR EventCode="4608" OR EventCode="6008")
| table _time name host dvc EventCode severity Message

Please share the query. Thanks
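One possible way to approach this, sketched under the assumption that "exclude host when event 4608 is observed within 5 minutes" means dropping hosts whose startup event (4608) arrives within 5 minutes (300 seconds) of their shutdown event (4609/6008); this has not been tested against real data:

index=windows product=Windows (EventCode="4609" OR EventCode="4608" OR EventCode="6008")
| eventstats max(eval(if(EventCode="4608", _time, null()))) AS last_startup max(eval(if(EventCode!="4608", _time, null()))) AS last_shutdown BY host
| where isnull(last_startup) OR last_startup - last_shutdown > 300
| table _time name host dvc EventCode severity Message

Hosts with no 4608 at all, or whose 4608 came more than 5 minutes after the shutdown event, are kept; everything else is filtered out per host.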
No, I installed the application directly. Should I uninstall the app, install the previous version, and then upgrade to the current version?
I posted a workaround if you're still having this issue.
After setting this aside until we finally upgraded Splunk, a solution has been found. Working with Splunk support for weeks, we were not able to fix it directly and concluded that the errors were due to Splunk trying to read the files before they are done being written by our diode software. The files are transferred once every 24 hours, so I created a script, run by a scheduled task, that copies the files to a different set of folders, and set up batch inputs to read and then delete the copies. All logs come through without any extra junk.  Thanks for your help! @yeahnah  @isoutamo 
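For anyone landing here later, the batch-input part of that workaround looks roughly like this in inputs.conf; this is a sketch only, and the path, sourcetype, and index are placeholders rather than the original poster's actual values:

# inputs.conf on the forwarder (sketch; path, sourcetype, and index are placeholders)
[batch:///opt/diode_copies/*.log]
# sinkhole is required for batch inputs: each file is indexed once and then deleted.
move_policy = sinkhole
sourcetype = diode:logs
index = main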
Hello everyone, I am using the Machine Agent without the bundle, version 22.6, as indicated in the past releases section of the docs. When I run java -jar machineagent.jar, I get the error: Could not find the main class: com.appdynamics.agent.sim.bootstrap.Bootstrap. Program will exit. I ran the command jar tf machineagent.jar | grep com/appdynamics/agent/sim/bootstrap/Bootstrap.class and the class shows up, so I don't understand what I'm doing wrong. Can you help me? (doc: https://docs.appdynamics.com/appd/24.x/latest/en/product-and-release-announcements/past-releases/past-agent-releases#id-.PastAgentReleasesv24.2-Version22.6.0-June29,2022)