All Posts

I've seen repFactor set to auto or 0. I'm changing all the non-internal indexes to auto (adding a repFactor line to the stanzas that are missing it). RF and SF are 2, and I have a single-site cluster with 6 indexers.
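For reference, a minimal sketch of an indexes.conf stanza with clustered replication enabled might look like this (the index name and paths are placeholders, not taken from the post above):

```
[my_custom_index]
homePath   = $SPLUNK_DB/my_custom_index/db
coldPath   = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb
repFactor  = auto
```

With repFactor = auto the index participates in cluster replication; with repFactor = 0 (the default) it does not.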
Hello Splunk Community, I need to find out how many upgrades were performed on systems and am unsure how best to proceed. The data is similar to what is listed below:

_time       hostname  system  model  version
2025-01-01  a         x       x      15.2(8)
2025-01-01  b         y       y      15.3(5)
2025-01-02  a         x       x      15.3(5)

There are thousands of systems with various versions. I am trying to find a way to capture devices that have gone from one version to a newer one, indicating an upgrade took place. Multiple upgrades could have occurred over time for a single device, and those need to be accounted for as well. Any help suggesting where to start would be greatly appreciated. Thanks. -E
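One possible starting point (an untested sketch; the index, sourcetype, and field names here are assumptions) is to sort each host's events chronologically and compare each event's version against the previous one with streamstats:

```
index=your_index sourcetype=your_sourcetype
| sort 0 hostname _time
| streamstats current=f window=1 last(version) as prev_version by hostname
| where isnotnull(prev_version) AND version != prev_version
| stats count as upgrade_count, values(version) as versions_seen by hostname
```

Note this counts any version change as an upgrade; if downgrades can occur in your data, you would need additional logic to compare the version strings properly.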
You should create your own app which contains all the needed certs. If you have Splunk Cloud in use, you can copy the idea from its Universal Forwarder app. Of course, this requires that you have added your own private CA.pem into Splunk's CA certs file, if you have one in use.
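As a rough sketch of what such a deployment app could contain (the app name, file names, and output group here are made up, and the exact attribute names vary across Splunk versions, so check the outputs.conf spec for yours):

```
org_all_certs/
    default/outputs.conf
    certs/myClientCert.pem

# default/outputs.conf
[tcpout:primary_indexers]
clientCert = $SPLUNK_HOME/etc/apps/org_all_certs/certs/myClientCert.pem
sslVerifyServerCert = true
```

Pushing one app like this from the deployment server keeps the certs and the outputs configuration together on every forwarder.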
Here is one docs page which explains how those steps are done and in what order: https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Searchtimeoperationssequence. You can see this after you have run your search by clicking the Job link -> Inspect Job and then opening search.log. There are several .conf presentations and Splunk blogs on how to use this information.
Did you wind up getting a good solution in place for pushing new certs from the deployment server?
If you have a so-called golden image which was created incorrectly (it contains those critical configurations), you could read this old post: https://community.splunk.com/t5/Installation/EC2-from-AMI-having-splunk-installed-stops-working/m-p/669633#M13418 r. Ismo
Thank you! With your help I made it work with an alternate solution.

props.conf
[sourcetype]
TRANSFORMS-extract_data = rename_data_to_event

transforms.conf
[rename_data_to_event]
REGEX = "data":\s*({.*?})
FORMAT = $1
WRITE_META = true
DEST_KEY = _raw

Really appreciate your help @livehybrid
Hi @bhavesh0124
I'm not able to test this directly at the moment, but the following might work for you!

== props.conf ==
[yourSourcetypeName]
TRANSFORMS-extractRaw = extractHECRaw

== transforms.conf ==
[extractHECRaw]
INGEST_EVAL = _raw:=json_extract(_raw,"data")

This should extract the data section of the JSON and assign it to _raw. If you need to extract the index/source, then you can do this before setting the new _raw value.

Did this answer help you? If so, please consider: adding kudos to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Earlier there was also a restriction that upstream versions must be higher than downstream, but this was removed in 9.x. That restriction meant that a UF's version could not be higher than the HFs'/indexers', and likewise an HF's version could not be higher than the indexers'. I suppose this was some kind of safeguard for Splunk to avoid some weird issues. I know that in many cases those versions work without issues even when the UF has a higher version than the indexers do.
How about this, if you don't need to get those immediately with your first search: just run your search, then click the event in question and expand it with the > mark at the beginning of the event, then click the _time field and it opens for you. Then just select the correct time slot and run the search again without any "matching words" like 'log_data="*error*"'.
Ah yes, okay @isoutamo, that is a fair point. Whilst I've had success with this previously, there is no guarantee it will go the same way for @AviSharma8!

The remote-update app simply has the following check: target_major_version <= current_major_version+1, and when I ran it, it was happy to do 8.0 -> 9.4! Nevertheless, I will update my original post to point at the official stance on this. According to the 8.2 docs, it's possible to upgrade a UF to 8.2 from 7.3 ("Upgrading a universal forwarder directly to version 8.2 is supported from versions 7.3.x, 8.0.x, and 8.1.x"), and 9.4 supports an upgrade from 8.2 ("Splunk supports a direct upgrade of a universal forwarder to version 9.4 from versions 8.2.x and higher of the universal forwarder.").

In the meantime, it's also worth mentioning that Splunk Enterprise version 9.0 and higher requires Linux kernel version 3.x or higher and has an updated OS support list. Check the supported OSes at https://docs.splunk.com/Documentation/Splunk/9.4.1/Installation/Systemrequirements

Did this answer help you? If so, please consider: adding kudos to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Thank you for your prompt response, @livehybrid. I have no control over the source JSON, unfortunately.

I tried sending it to the raw HEC endpoint. It works flawlessly; however, the data shows up as this nested JSON. Any way to make it nice and tidy?

{
  data: {
    message: This is New test
    severity: info
  }
  index: test
  sourcetype: test:json
  source: telemetry
}
If I recall correctly, there are some versions which could have issues with ingesting data (at least on the UF side, but you have an HF). The best option for getting more information is to look at your _internal logs and try to work out what happened when (and just before) this issue arose. As @livehybrid said, first try to figure out whether the issue is on the receiving or sending side, or even on the indexers. https://community.splunk.com/t5/Getting-Data-In/Splunk-Indexer-Parsing-Queue-Blocking/td-p/583312 is an old post which could relate to this issue, or at least it contains some useful links.
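As an untested starting point for that _internal digging (the span is just a placeholder), blocked queues can often be spotted in metrics.log like this:

```
index=_internal source=*metrics.log* group=queue blocked=true
| timechart span=10m count by name
```

The name of the first queue to block (parsing, aggregation, typing, or indexing) usually points at which tier, and which stage of the pipeline, is struggling.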
You should remember that there are other ways to use that lookup table than just adding its name into your search! It can be used as an automatic lookup, via the lookup command, via inputlookup/outputlookup, and even data models can use it. For that reason you need to dig a little bit deeper to find all those usages. I'm not 100% sure whether all of those are reported into the _audit log or not (I expect not). It could even require that you somehow look at users' search.log to see how Splunk has expanded e.g. automatic lookups.
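As a rough first pass (the lookup file name is a placeholder, and as noted above this will miss automatic lookups that never appear in the search string), you could grep the audit log for explicit references:

```
index=_audit action=search info=granted
| search search="*my_lookup.csv*"
| stats count by user, search
```

Anything the audit log misses would have to come from inspecting search.log for expanded searches, or from checking the automatic-lookup and data-model configurations themselves.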
What is the repFactor in your indexes.conf file for those indexes? And do you have a multisite or single-site cluster? And what are your RF + SF and site factors, if you have a multisite cluster?
Actually, they said "it could update from 8.0 to 9.0+", but they didn't say "it could upgrade directly from 8.0 to 9.0+". And in https://docs.splunk.com/Documentation/Forwarder/1.0.0/ForwarderRemoteUpgradeLinux/Architecture they say "... validates the universal forwarder migration path from the current version to the destination version." The UF contains some Splunk DBs (e.g. the fishbucket in /opt/splunkforwarder/var/lib/splunk/). From time to time they change the internals of the DB structure somehow (I don't know exactly how). Those changes must be applied to those DBs when you are upgrading UFs. As I said, there could be some cases where this is needed, but I'm quite sure that updating from 7.3 -> 9.4 does not belong to that set.
I picked a bid and searched for it. The only events are about its creation; then the errors immediately start. I checked three other bids with the same results. I even see messages about moving from hot to warm. ("Cleaning up usage" events are excluded from these search results.)
This seems to work in the Search app:

index=_internal source=*license_usage.log* (host=*.splunk*.* NOT (host=sh-* host=*.splunk*.*)) TERM("type=RolloverSummary")
| rex field=_raw "^(?<timestring>\d\d-\d\d-\d{4}\s\d\d:\d\d:\d\d.\d{3}\s\+\d{4})"
| eval _time=strptime(timestring,"%m-%d-%Y %H:%M:%S.%N%z")
| eval z=strftime(now(),"%z")
| eval m=substr(z,-2)
| eval h=substr(z,2,2)
| eval mzone=if(z != 0, ((h*60)+m)*(z/abs(z)), 0)
| eval min_to_utc=-1440-mzone
| eval rel_time=min_to_utc."m"
| eval _time=relative_time(_time, rel_time) + 1
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "volume" fixedrange=false
| eval GB=round(volume/pow(2,30),3)
| append
    [| search (index=_cmc_summary OR index=summary) source="splunk-entitlements"
     | rex field=host "^[^.]+[.](?<stack>[^.]+)"
     | search
         [| rest /services/server/info splunk_server=local
          | fields splunk_server
          | rex field=splunk_server "^[^.]+[.](?<stack>[^.]+)"
          | fields stack]
     | rex field=_raw "^(?<timestring>\d\d/\d\d/\d{4}\s\d\d:\d\d:\d\d\s\+\d{4})"
     | eval _time=strptime(timestring,"%m/%d/%Y %H:%M:%S %z")
     | eval z=strftime(now(),"%z")
     | eval m=substr(z,-2)
     | eval h=substr(z,2,2)
     | eval mzone=if(z != 0, ((h*60)+m)*(z/abs(z)), 0)
     | eval min_to_utc=-1440-mzone
     | eval rel_time=min_to_utc."m"
     | eval _time=relative_time(_time, rel_time)
     | bin _time span=1d
     | stats max(ingest_license) as "license limit" by _time]
| stats values(*) as * by _time
| fields - volume
You can use Ctrl+Shift+E to expand macros in the SPL window, or Cmd+Shift+E on macOS. That way you can expand the CMC's query in the CMC app and then copy and modify it in your own app.