All Posts

Thank you! With your help I made it work with an alternate solution.

props.conf
[sourcetype]
TRANSFORMS-extract_data = rename_data_to_event

transforms.conf
[rename_data_to_event]
REGEX = "data":\s*({.*?})
FORMAT = $1
WRITE_META = true
DEST_KEY = _raw

Really appreciate your help @livehybrid
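For anyone following along, here is roughly what that transform does at index time, assuming an incoming raw payload shaped like the example later in this thread (the exact field values are only illustrative):

Incoming _raw:
{"data": {"message": "This is New test", "severity": "info"}, "index": "test", "sourcetype": "test:json", "source": "telemetry"}

_raw after the transform (the first capture group replaces the whole event):
{"message": "This is New test", "severity": "info"}

Note that the non-greedy {.*?} only works while the "data" object contains no nested braces; a nested object would be cut off at the first closing brace.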
Hi @bhavesh0124
I'm not able to test this directly at the moment, but the following might work for you!

== props.conf ==
[yourSourcetypeName]
TRANSFORMS-extractRaw = extractHECRaw

== transforms.conf ==
[extractHECRaw]
INGEST_EVAL = _raw:=json_extract(_raw,"data")

This should extract the data section of the JSON and assign it to _raw. If you need to extract the index/source then you can do this before setting the new _raw value.
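If it helps, here is a sketch of what that "extract the index first" step could look like, assuming the payload carries the index at the top level as in the example above (untested, so treat the exact expression as an assumption):

== transforms.conf ==
[extractHECRaw]
# Take the index from the payload first, then overwrite _raw with the "data" object
INGEST_EVAL = index:=json_extract(_raw,"index"), _raw:=json_extract(_raw,"data")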
Earlier there was also a restriction that versions on the receiving side must be at least as high as those on the sending side, but this was removed in 9.x. In practice it meant that UF versions could not be higher than the HF/indexer versions, and HF versions could not be higher than the indexer versions. I suppose this was some kind of safeguard for Splunk to avoid weird issues. I know that in many cases those version combinations work without issues even when the UF is on a higher version than the indexers.
How about this, if you don't need to get those immediately with your first search: just run your search, then click the correct event and open it from the > mark at the beginning of the event. Click the _time field and a dialog opens for you. Then just select the correct time slot and run the search again without any "matching words" like 'log_data="*error*"'.
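If you prefer to stay in SPL rather than the time picker, a rough equivalent is to re-run the search with an explicit window around the event's _time and without the keyword filter (the index name and timestamps here are just placeholders):

index=your_index earliest="04/30/2025:10:00:00" latest="04/30/2025:10:05:00"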
Ah yes, okay @isoutamo - that is a fair point. Whilst I've had success with this previously, there is no guarantee it will go the same way for @AviSharma8!

The remote updater app simply has the following check: target_major_version <= current_major_version+1, and when I ran it, it was happy to do 8.0 -> 9.4! Nevertheless, I will update my original post and point at the official stance on this. According to the 8.2 docs it's possible to upgrade a UF from 7.3 -> 8.2 ("Upgrading a universal forwarder directly to version 8.2 is supported from versions 7.3.x, 8.0.x, and 8.1.x") and 9.4 supports an upgrade from 8.2 ("Splunk supports a direct upgrade of a universal forwarder to version 9.4 from versions 8.2.x and higher of the universal forwarder.")

In the meantime, it's also worth mentioning that Splunk Enterprise version 9.0 and higher requires Linux kernel version 3.x or higher and has an updated OS support list - check the supported OS list at https://docs.splunk.com/Documentation/Splunk/9.4.1/Installation/Systemrequirements
Thank you for your prompt response, @livehybrid. I have no control over the source JSON unfortunately.

I tried sending it to the raw HEC endpoint. It works flawlessly. However, the data shows up as this nested JSON. Any way to make it nice and tidy?

{
  "data": {
    "message": "This is New test",
    "severity": "info"
  },
  "index": "test",
  "sourcetype": "test:json",
  "source": "telemetry"
}
If I recall correctly, there are some versions which could have issues with ingesting data (at least on the UF side, but you have an HF). The best option to get more information is to look at your _internal logs and try to find out what happened when (and just before) this issue arose. As @livehybrid said, first try to figure out whether the issue is on the receiving or sending side, or even on the indexers. https://community.splunk.com/t5/Getting-Data-In/Splunk-Indexer-Parsing-Queue-Blocking/td-p/583312 is one old post which could be related to this issue, or at least it contains some useful links.
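As a starting point for that _internal digging, a commonly used queue-fill search looks something like this (add a host filter for your HF/indexers; the field names are the usual metrics.log ones, so verify them in your environment):

index=_internal source=*metrics.log* sourcetype=splunkd group=queue
| eval pct_full=round(current_size_kb/max_size_kb*100,1)
| timechart span=5m max(pct_full) by name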
You should remember that there are other ways to use that lookup table than just adding its name into your search! It can be used as an automatic lookup, via the lookup command, via inputlookup/outputlookup, and even data models can use it. For that reason you need to dig a little bit deeper to find all those usages. I'm not 100% sure whether all of those are reported into the _audit log or not (I expect not). It might even require looking at users' search.log files to see how Splunk has expanded e.g. automatic lookups.
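For the cases that do show up in _audit, something like this is a reasonable first pass (replace my_lookup with your lookup or definition name; this only catches searches that reference the name literally, not automatic lookups):

index=_audit action=search info=granted search=*my_lookup*
| table _time user search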
What is the repFactor in the indexes.conf file for those indexes? And are you on a multisite or single-site cluster? And what are your RF + SF and site factors if you have a multisite cluster?
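For reference, this is the kind of stanza to check in indexes.conf (names and paths here are placeholders; repFactor = auto is what makes an index participate in cluster replication):

[your_index]
homePath = $SPLUNK_DB/your_index/db
coldPath = $SPLUNK_DB/your_index/colddb
thawedPath = $SPLUNK_DB/your_index/thaweddb
repFactor = auto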
Actually they said "it could update from 8.0 to 9.0+" but they didn't say "it could upgrade directly from 8.0 to 9.0+". And in https://docs.splunk.com/Documentation/Forwarder/1.0.0/ForwarderRemoteUpgradeLinux/Architecture they say it "... validates the universal forwarder migration path from the current version to the destination version." A UF contains some Splunk databases (e.g. the fishbucket in /opt/splunkforwarder/var/lib/splunk/). From time to time they change the internals of the DB structure somehow (I don't know exactly how). Those changes must be applied to those DBs when you are upgrading UFs. As I said, there could be some cases where this is needed, but I'm quite sure that updating from 7.3 -> 9.4 does not belong to that set.
I picked a bid and searched for it. The only events are about its creation; then the errors immediately start. I checked three other bids with the same results. I even see messages about moving from hot to warm. ("Cleaning up usage" events are excluded in these search results.)
This seems to work in the search app:

index=_internal source=*license_usage.log* (host=*.splunk*.* NOT (host=sh-* host=*.splunk*.*)) TERM("type=RolloverSummary")
| rex field=_raw "^(?<timestring>\d\d-\d\d-\d{4}\s\d\d:\d\d:\d\d.\d{3}\s\+\d{4})"
| eval _time=strptime(timestring,"%m-%d-%Y %H:%M:%S.%N%z")
| eval z=strftime(now(),"%z")
| eval m=substr(z,-2)
| eval h=substr(z,2,2)
| eval mzone=if(z != 0, ((h*60)+m)*(z/abs(z)), 0)
| eval min_to_utc=-1440-mzone
| eval rel_time=min_to_utc."m"
| eval _time=relative_time(_time, rel_time) + 1
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "volume" fixedrange=false
| eval GB=round(volume/pow(2,30),3)
| append
    [| search (index=_cmc_summary OR index=summary) source="splunk-entitlements"
    | rex field=host "^[^.]+[.](?<stack>[^.]+)"
    | search
        [| rest /services/server/info splunk_server=local
        | fields splunk_server
        | rex field=splunk_server "^[^.]+[.](?<stack>[^.]+)"
        | fields stack]
    | rex field=_raw "^(?<timestring>\d\d/\d\d/\d{4}\s\d\d:\d\d:\d\d\s\+\d{4})"
    | eval _time=strptime(timestring,"%m/%d/%Y %H:%M:%S %z")
    | eval z=strftime(now(),"%z")
    | eval m=substr(z,-2)
    | eval h=substr(z,2,2)
    | eval mzone=if(z != 0, ((h*60)+m)*(z/abs(z)), 0)
    | eval min_to_utc=-1440-mzone
    | eval rel_time=min_to_utc."m"
    | eval _time=relative_time(_time, rel_time)
    | bin _time span=1d
    | stats max(ingest_license) as "license limit" by _time]
| stats values(*) as * by _time
| fields - volume
You can use Ctrl+Shift+E to expand macros in the SPL search window, or Cmd+Shift+E on macOS. That way you can expand the CMC's queries in the CMC app and then copy and modify them in your own app.
Hi @bhavesh0124
Are you able to reconfigure the source so that it sends with an "event" key instead of "data"? That JSON structure is almost correct for the event HEC endpoint, meaning it will use the index/sourcetype/source values etc. in the JSON payload and index the "event" key as the _raw field.

If you aren't able to correct this at the source then you will need to use the "raw" HEC endpoint and then do a chunky amount of props/transforms to extract the relevant index/source/sourcetype from the event and rewrite the data content into the _raw field. This is less than ideal but possible; it may well be easier to adjust the source which is sending it incorrectly. Check out Format events for HTTP Event Collector.
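For illustration, assuming the source could be changed, the same payload renamed for the event endpoint would look roughly like this (host and token are placeholders):

curl -k https://your-splunk-host:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"index": "test", "sourcetype": "test:json", "source": "telemetry", "event": {"message": "This is New test", "severity": "info"}}'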
@isoutamo I also presumed the 12k_line.csv would have > 10,000 events (probably 12,000!). I don't think this should be an issue here though, as append supports 50,000 events by default. @hank72 Please let us know if you have any trouble with the provided search or if I've got the wrong end of your requirements.
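If the lookup did ever grow past that default, the limit can be raised per search with append's maxout argument, along these lines (the value here is arbitrary):

... | append maxout=100000 [| inputlookup 12k_line.csv]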
Hi @isoutamo
I'm a little confused here, as I was under the impression UFs were pretty stateless: they don't have Python or the KV Store and do not locally index data, compared to HFs or other full Splunk Enterprise instances, which definitely need to be updated to specific versions incrementally. I've updated countless UFs from 7 -> 9 without issue, but I'm happy to update my previous post if needed.

Looking at the remote UF updater (https://docs.splunk.com/Documentation/Forwarder/1.0.0/ForwarderRemoteUpgradeLinux/Supporteduniversalforwarderversions), this supports a minimum version of 8.0.0 and upgrades directly to 9.x, so I am content that this is feasible. I know that for non-UF hosts there is a pretty strict upgrade path.
You could run, on the SH's command line, /opt/splunk/bin/splunk package app <app name> to merge and export the app.
Whoops! Sorry, I read that too quickly as 2025. I see v2.0.2 previously had cloud compatibility, so I presume it was dropped due to not being updated for new versions of the Splunk SDK etc. Hopefully they will get back to you and someone will update it soon.
Actually, you cannot update it directly from old to new unless it matches the restrictions which are defined for Splunk servers too! Usually this means that you can jump over one version, like 7.3.x -> 8.1.x -> 9.0.x -> 9.2.x -> 9.4.x. Also, you must start the UF at each step so that e.g. the fishbucket DB and other things which have changed between versions get their internal updates.

Of course you could remove the old UF installation and install the newest version from scratch. But then you need to remember that this means:
- You lose your UF's GUID => you will get a new UF from the server's point of view. Of course you can use the same GUID in .../etc/instance.cfg and keep the old UF's identity on the server side (see the snippet below).
- splunk.secret will change, which means that if you have any secrets/passwords in your old configurations and you try to use those, you need to use the plain-text versions and let the UF encrypt them again.
- You lose the information about where your inputs are, as you lose the fishbucket DB which keeps track of those => you will re-ingest all files again which you have on this node.
- Maybe something else which I forgot?

I know that updating from some versions to others could work without issues, but not for all. And those issues could arise later on, not immediately after you start the new version. I also strongly recommend you use the OS's native software packages instead of the tar versions. That way it's much easier to manage your OS-level information, as you can trust your package management software.
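As an illustration of that GUID point, the instance.cfg in question typically looks like this (the GUID value is obviously just a placeholder; copy the one from the old installation before wiping it):

[general]
guid = 01234567-89ab-cdef-0123-456789abcdef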