All Posts

Does anyone know how to do this on Splunk v8.0.5?
I have an existing search head that is peered to 2 cluster managers. This SH has the ES app on it. I am looking to add additional data from remote indexers. Do I just need to add the remote cluster manager as a peer to my existing SH so that I can access the data in ES?
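For reference, attaching a search head to an additional indexer cluster is normally a server.conf change on the search head, with one stanza per cluster manager. A minimal sketch for 8.0.x (which still uses the "master" terminology; hostnames, labels, and the pass4SymmKey values are placeholders):

[clustering]
mode = searchhead
master_uri = clustermaster:local, clustermaster:remote

[clustermaster:local]
master_uri = https://cm-local.example.com:8089
pass4SymmKey = <local cluster key>

[clustermaster:remote]
master_uri = https://cm-remote.example.com:8089
pass4SymmKey = <remote cluster key>

On newer versions the equivalent settings are manager_uri and [clustermanager:<label>]. Whether ES then makes use of the remote data also depends on its searches and data models covering the indexes coming from that cluster.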
I know this is a while ago now, but maybe helpful to others: try using the "hidden" dimension `_timeseries`. This is a JSON string that is an amalgamation of all of the dimensions for each datapoint. Take care, though: the results may be of (very) high cardinality, and splunkd doesn't (yet?) have very strong protections for itself (in terms of RAM used while searching) on this code path, so it is (IMHO) easy to crush your indexer tier's memory and cause a lot of thrashing.
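To make that concrete, a hypothetical sketch (the index and metric names are placeholders, and it assumes `_timeseries` can be grouped on like any other dimension):

| mstats avg(_value) WHERE index=my_metrics AND metric_name="cpu.utilization" BY _timeseries span=5m
| spath input=_timeseries

The trailing spath is optional; since `_timeseries` is a JSON string of all the dimensions, it can expand them back into separate fields for display.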
I tried what you suggested, but it did not seem to help. It seemed as if the fix_subsecond stanza wouldn't be executed at all; the _h KV pair followed _ts's value without any whitespace. After experimenting a bit more, I now have this, but it doesn't work either:

[md_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1 $0
DEST_KEY = time_temp

[md_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond::(\.\d+)
FORMAT = $1
DEST_KEY = subsecond_temp

[md_fix_subsecond]
INGEST_EVAL = _raw=if(isnull(subsecond_temp),time_temp+" "+_raw,time_temp+subsecond_temp+" "+_raw)

Plus props.conf:

[default]
ADD_EXTRA_TIME_FIELDS = none
ANNOTATE_PUNCT = false
SHOULD_LINEMERGE = false
TRANSFORMS-zza-syslog = syslog_canforward, reformat_metadata, md_add_separator, md_source, md_sourcetype, md_index, md_host, md_subsecond, md_time, md_fix_subsecond, discard_empty_msg
# The following applies for TCP destinations where the IETF frame is required
TRANSFORMS-zzz-syslog = syslog_octet_count, octet_count_prepend
# Comment out the above and uncomment the following for udp
#TRANSFORMS-zzz-syslog-udp = syslog_octet_count, octet_count_prepend, discard_empty_msg

[audittrail]
# We can't transform this source type, it's protected
TRANSFORMS-zza-syslog =
TRANSFORMS-zzz-syslog =

However, this now breaks logging and I'm getting no logs forwarded to syslog-ng. The connection is up, but no meaningful data arrives, just "empty" packages. What may be the problem? Did I break the sequence of the stanzas? (I did not seem to understand it in the first place, as they seem to be in backward order compared to how the KV pairs follow each other in the actual log message.)
Hi @Osama.Abbas, Thanks for asking your question on the community. I have filed a ticket to share this information with the Docs team. I will post a reply when I've heard back from them.
Hi @Stephen.Knott, Did you see the most recent reply from Michael?
I'm not sure what you mean by "settings", but since your AIO had all the indexed data and you've spun up new, empty indexers, it's logical that your SH will search the empty indexers. The proper way to expand from a single AIO server is either as @isoutamo wrote (which is a bit more complicated to do as a single migration) or the other way:
1) Add another host as search head and migrate the search-time settings there. Leave your old server as the indexer. Verify that everything is working properly.
2) Add a CM and add your indexer as a peer to the CM. You can either set RF=SF=1 for starters and raise it later when you add another peer, or add another indexer at this step. The trick here is that your already indexed data is not clustered; while it should be searchable, it will not get replicated.
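To illustrate the RF=SF=1 starting point mentioned above, a hedged server.conf sketch for the CM (the key is a placeholder); raising the factors later is just a matter of editing these two values once a second peer has joined:

[clustering]
mode = manager
replication_factor = 1
search_factor = 1
pass4SymmKey = <cluster key>

On pre-9.x versions the mode value is master instead of manager.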
If this is production, or anything other than your lab environment, then you should configure TLS on those connections. There are instructions in the Securing Splunk Enterprise guide, and there is also a .conf23 presentation about TLS ("slippery" or something similar in the title).
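Purely as an illustration of what that can look like on the forwarder-to-indexer port (certificate paths, port, and passwords are placeholders; the securing guide covers the full and current set of settings):

inputs.conf on the receiving indexer:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslPassword = <cert password>
requireClientCert = false

outputs.conf on the forwarder:

[tcpout:primary_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
sslPassword = <cert password>
sslVerifyServerCert = true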
If I recall right, you must "enable" the REST API first with a support ticket, or by using ACS to allow some networks to reach the search-api endpoint. https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud After that you can query via the REST endpoints.
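Once that is open, a hedged example of what a query could look like (stack name and credentials are placeholders, and the exact host and port to use depend on your Splunk Cloud experience, so check the RESTTUT page above):

curl -k -u 'your_user:your_password' \
  https://yourstack.splunkcloud.com:8089/services/search/jobs/export \
  -d search="search index=_internal | head 5" \
  -d output_mode=json

output_mode=json is what returns the results as JSON rather than the default XML.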
When your starting point is an AIO and you want to go to a single SH + indexer cluster while keeping your old data, then the high-level steps are:
1) Install the CM and configure it.
2) Add the current AIO node as the 1st peer.
3) Add a 2nd peer.
4) Add a new SH.
5) Copy the needed apps etc. from the AIO into the new SH.
Please check the exact steps and how to do them in the document @gcusello pointed to. There are detailed instructions on how to configure your CM, how to add peers, when and how to copy apps, and when to remove unnecessary apps from the old AIO node before using it as a search peer.
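As a rough sketch of what steps 2 and 3 (the peer joins) can look like on the command line, with the manager URI, replication port, and secret as placeholders; this is the 9.x syntax, and the exact flags should be checked against the documentation referenced above:

On the old AIO node, and then on the new indexer:

splunk edit cluster-config -mode peer -manager_uri https://cm.example.com:8089 -replication_port 9887 -secret <cluster key>
splunk restart

On pre-9.x releases the flags are -mode slave and -master_uri instead.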
No, you cannot use indexed_extractions or kv_mode, as the event is not JSON; only a part of the event is. The way I have gone about this is to extract the JSON bit automatically using props/transforms, so the JSON ends up in its own field and can then be worked on. Otherwise I would look at whether you really need to extract all the JSON, or whether you can just extract the known important values by pulling their key-value pairs with regex, or even use ingest-time eval to extract the non-JSON bits to fields, then dump them and keep only the JSON. But it really all depends on the end user and their requirements/use case. Either way this is a custom data onboarding and will require some work to get your use case done. I usually ask why the sender is not using properly formatted JSON events to begin with, or whether they can just insert kv pairs like "foo=bar" instead of the useless JSON blob thrown into an unstructured event. Shoving JSON into a non-JSON event is not really the flex people think it is... but hey, that's what devs do these days. Either way, cleaning up log formats can be hard, so you may have to just find a way that works for this end user. I know the pain; I have to deal with this in OTel Collector events like this:

2025-01-09T20:29:14.015Z info internal/retry_sender.go:126 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec/platform_logs", "error": "Post \"https://http-inputs.foo.splunkcloud.com/services/collector\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)", "interval": "5.105973834s"}

AFAIK there is no way you can deal with this so that a user doesn't have to spath the field, unless you hide it in a dashboard or fundamentally change the format of the event that's indexed.
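As one hedged illustration of that props/transforms approach (the sourcetype name, field name, and regex are placeholders that would have to match the real events), a search-time extraction that puts the JSON blob into its own field:

transforms.conf:

[extract_embedded_json]
REGEX = data=({.*})
FORMAT = json_payload::$1

props.conf:

[your:sourcetype]
REPORT-embedded_json = extract_embedded_json

Users still need | spath input=json_payload to expand the inner keys at search time, which matches the caveat above.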
Hello everyone, I'm trying to collect data in JSON format from Splunk Cloud, and I understand that one of the options is using the REST API. However, I'm not sure which endpoint I should use, or if there's another recommended way to achieve this directly from Splunk Cloud. I've been testing with the following endpoints: /services/search/jobs/ and /servicesNS/admin/search/search/jobs, but in both cases I only get a 404 error indicating that the URL is not valid. Could you guide me on how to configure data collection in this format? What would be the correct endpoint? Which key parameters should I include in my request? Or, if there's an easier or more direct method, I'd appreciate it if you could explain. The version of Splunk I'm using is 9.3.2408.104. Thank you in advance for your help!
Hi @Karthikeya , you have to add this option to the stanza in props.conf where your sourcetype is defined. Then you have to add this props.conf to the add-on containing the inputs.conf and to the Search Head. Ciao. Giuseppe
Hi @kamlesh_vaghela, I followed your previous instructions but encountered an error in my console, which is consistent with the issue in my primary use case. I suspect the problem lies in the placement of my JavaScript file. Currently, the directory structure is as follows:
Python script: /opt/splunk/etc/apps/search/bin
JavaScript file: /opt/splunk/etc/apps/search/appserver/static
Could you please help me identify if this directory setup might be causing the issue?
@mattymo this is how my Splunk events look: <12>Nov 12 20:15:12 localhost whatever: data={"a":"b","c":"d"} and the rest are JSON fields. As of now we are using the spath command in the search, which is not acceptable to the customer. They want these JSON fields to be extracted automatically once the onboarding is done. Can I set indexed_extractions=json or kv_mode=json to achieve this? I am not sure where to put these settings. If I can achieve my requirement this way, please guide me through the steps at least.
Hi Karthikeya! Are you parsing JSON out of a non-JSON payload? What would a sample event look like? Are they not JSON to begin with? Do you need the rest of the event in Splunk, or just the JSON part? The short answer is: once you prove your extraction works for all your events in search, you can move the regex parsing to the "props and transforms" configuration so you don't need to run it every time someone searches that sourcetype. It is not possible to give you every step as it depends on your data, outcomes and environment, but from what you shared, see this documentation - https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Knowledge/Createandmaintainsearch-timefieldextractionsthroughconfigurationfiles
I've seen that, but I don't see in it the right way to move between topologies.
Sorry for not getting the terms right. So I started with an AIO. I added a Cluster Manager and two indexers. I connected the AIO to this as the Search Head. In that process I lost all of the settings and data that were in the AIO.
Hi @mattymo , Here is the question link - https://community.splunk.com/t5/Getting-Data-In/Query-to-be-auto-applied/m-p/708893.. Please help me out there.
Hi all, I've seen older posts on this topic but nothing in the past couple of years, so here goes. Is there a way to export the application interactions/dependencies seen on the Flow Map? E.g. Tier A calls Tier B with HTTP, Tier C calls these specific backends on nnn ports. Or some utility that recursively "walks" the tree of Tiers/Nodes/Backends using the Application Model API calls?
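As a hedged sketch of the "walk the tree" idea, the Application Model REST endpoints can be enumerated with something like the following (controller host, account, credentials and application name are placeholders; authentication may instead use an API client/token depending on your setup):

curl -u 'user@account:password' "https://controller.example.com/controller/rest/applications?output=JSON"
curl -u 'user@account:password' "https://controller.example.com/controller/rest/applications/MyApp/tiers?output=JSON"
curl -u 'user@account:password' "https://controller.example.com/controller/rest/applications/MyApp/backends?output=JSON"

This lists the tiers, nodes and backends per application; the call relationships drawn on the Flow Map (which tier calls which, and over what protocol) are not returned directly by these endpoints, so they would still have to be reconstructed from metrics or other APIs.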