All Posts

This was it. Thank you for the assist.
Fixed (swapped).
Thanks for the insight. Does this also mean that, once authenticated with hypothetically stolen account credentials, an attacker could retrieve plaintext passwords simply with REST commands?
Making some assumptions here (because as @PickleRick says, your sample data is confusing, and a raw sample in a code block would be a better way to present it), but have you tried using extract?

| rename CompLog as _raw
| extract
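If the CompLog field really does hold the comma-separated key=value string shown in the question, a minimal sketch of that approach would be the following (the explicit delimiters are an assumption - extract will often find key=value pairs on its own):

| rename CompLog as _raw
| extract pairdelim="," kvdelim="="
| table Loc Comp User Date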
1. You're pasting an ambiguous description of your data. Show us a sample of raw event(s), anonymized if needed.
2. You don't have to escape the equals sign. It's in no way special in regex.
3. You should escape quotes though - not because of regex, but because the regex is passed as a string within quotes.
4. As I said in point 1, it's not entirely clear what your data looks like, but if the order of the fields is fixed (otherwise you need to extract each one separately), the typical approach would be something like this:

| rex "Field1=\\s*(?<Field1>[^,]*),\\s*Field2=\\s*(?<Field2>[^,]*)"

and so on. Generally you anchor your regex with fixed text (field name, comma) and capture everything in between. Notice the double backslashes, since the regex is a string argument. If you're sure you won't have spaces, you can drop the whitespace-matching part. And as always - you can test your regex at https://regex101.com/
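Adapted to the field names in this thread's sample data (Loc, Comp, User, Date), an illustrative sketch of that single-rex-per-output approach - assuming the field order is fixed and values never contain commas - could look like:

| rex field=CompLog "Comp=\s*(?<Computer>[^,]+),\s*User=\s*(?<User>[^,]+),\s*Date=\s*(?<Date>[^,]+)"
| table Computer User Date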
I have the below search and I want to modify it to get the bandwidth utilization percentage. What's the best way to go about that, and what should I add to my search?

index=snmp sourcetype=snmp_attributes Name=ifHCInOctets host=xyz
| streamstats current=t global=f window=2 range(Value) AS delta BY UID
| eval mbpsIn=delta*8/1024/1024
| append [search index=snmp sourcetype=snmp_attributes Name=ifHCOutOctets host=xyz
    | streamstats current=t global=f window=2 range(Value) AS delta BY UID
    | eval mbpsOut=delta*8/1024/1024 ]
| search UID=1
| timechart span=5m per_second(mbpsIn) AS MbpsIn per_second(mbpsOut) AS MbpsOut BY UID
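One possible direction, sketched under the assumption that the link capacity is known (1000 Mbps is hard-coded below purely as a placeholder; polling the SNMP ifHighSpeed attribute and joining it in would be another option). Since UID=1 already limits the search to one interface, the BY UID split is dropped here so the column names stay simple - shown for the inbound direction only, with the outbound side following the same pattern:

index=snmp sourcetype=snmp_attributes Name=ifHCInOctets host=xyz UID=1
| streamstats current=t global=f window=2 range(Value) AS delta BY UID
| eval mbpsIn=delta*8/1024/1024
| timechart span=5m per_second(mbpsIn) AS MbpsIn
| eval UtilizationPctIn=round(MbpsIn/1000*100,2)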
Hey Will,

the hostname has been changed for over 7 days now.

|rest splunk_server=local /services/server/info | table host

returns the correct (updated) hostname, and it does match the hostname seen in the search

index=_internal source=*license_usage.log*

In the License Manager report, however, the dropdown in the top-left corner still uses the old (Amazon-assigned) hostname and does not reflect the name change.
Need help cleaning up my rex command lines. The data is comma-delimited, and I want to extract the value after the "=" character for the fields Location=, Computer=, User=, and Date=.

Sample index data:

Index = computerlogs
Field name: CompLog
Field values:
Loc=Warehouse, Comp=WH-SOC01, User= username1, Date=2025-03-18
Loc=Warehouse, Comp=WH-SOC02, User= username2, Date=2025-03-20
Loc=Warehouse, Comp=WH-SOC03, User= username1, Date=2025-03-24

I created a dashboard showing all logins with only computer name, user and date, as below.

Working query:

index=computerlogs
| rex field=CompLog "([^,]+,){1}(?<LogComp>[^,]+)"
| rex field=LogComp "\=(?<Computer>[^,]+)"
| rex field=CompLog "([^,]+,){2}(?<LogUser>[^,]+)"
| rex field=LogUser "\=(?<User>[^,]+)"
| rex field=CompLog "([^,]+,){3}(?<LogDate>[^,]+)"
| rex field=LogDate "\=(?<Date>[^,]+)"
| table Computer User Date

Computer          User                  Date
WH-SOC01       username1    2025-03-18
WH-SOC02       username2    2025-03-20
WH-SOC03       username1    2025-03-24

My ask is to clean up the rex commands above so that, if possible, I only have one rex command line for each field I am trying to capture. I tried to combine the two rex command lines into one. I know I need to add the "\=" argument to get everything after the "=" character, but I get an error with my attempts below:

| rex field=CompLog "([^,]+,){1}\=(?<Computer>[^,]+)"
| rex field=CompLog "([^,]+,){1}"\=(?<Computer>"[^,]+)"
| rex field=CompLog "([^,]+,){1}"\=(?<Computer>[^",]+)"

Any help would be greatly appreciated. Thanks.
Hi

The license dashboard often uses the RolloverSummary data, which is (usually) generated at midnight. If it's been less than 24 hours, this could be why you are seeing no data.

Also, the search uses a macro, `set_local_host`, which basically uses the REST endpoint in the SPL below to get the current hostname. This means it may not show the data for the previous 60 days, because that data will have been saved under the old hostname.

When you run the following, do you get your old server name or the new one?

|rest splunk_server=local /services/server/info | table host

Does it match a host you see in the below?

index=_internal source=*license_usage.log*

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
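A possible follow-up check, sketched on the assumption that the dashboard is driven by the standard RolloverSummary events: this shows which host value the rollover records are stored under and the time range they cover, which would confirm whether older usage is sitting under the previous hostname.

index=_internal source=*license_usage.log* type=RolloverSummary
| stats count earliest(_time) AS first_seen latest(_time) AS last_seen by host
| convert ctime(first_seen) ctime(last_seen)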
Hi @SrinivasuluS

Just a follow-up regarding maxTotalDataSizeMB being used in some of these queries that I feel I should raise: the default value, if unchanged, is 500GB. If you have 10 indexes and "filled" them all, you would consume 5TB of storage - but if your storage location is smaller than 5TB (or rather, smaller than the total of your maxTotalDataSizeMB values), then you will start losing older data *before* maxTotalDataSizeMB is reached.

Similarly, that value is per-indexer, so if you have 10 indexers then your "total available storage" for each index is 10*maxTotalDataSizeMB (10x500GB = 5TB) - but if you keep 3 copies of the data, you'll chew through that storage quicker, if that makes sense?

Also, if you have frozenTimePeriodInSecs set to 90 days, but after 45 days you've already reached your maxTotalDataSizeMB for that index, then you'll never make it to the 90-day retention you're expecting. The same applies in reverse: if frozenTimePeriodInSecs is set to 90 days and after 90 days you've only used 50GB, you're never going to "fill up" maxTotalDataSizeMB (unless you start sending more data)!

Essentially what I'm trying to say is that these searches might not give the answer you think you're getting unless other factors are considered (which you might already be aware of - but I wanted to highlight it for others who might see these responses).

Hopefully this makes sense, but if you want any further guidance on how to apply these searches to your environment then please let us know a bit more about your end goal here, along with details of your environment: how many indexers, is it a cluster, what are your replication and search factors, single or multi-site, any other settings changed (like TSIDX reduction), presumably not using SmartStore?

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
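For reference, a minimal indexes.conf sketch of the two settings being discussed (the stanza name and values are purely illustrative, not a recommendation):

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Size-based cap per indexer; the oldest buckets are frozen once the index reaches this size
maxTotalDataSizeMB = 500000
# Time-based retention; 90 days expressed in seconds - whichever limit is hit first wins
frozenTimePeriodInSecs = 7776000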
Hi,

I have recently changed the OS hostname, followed by a Splunk hostname change, on a single-node deployment. I am still seeing the old hostname in the Splunk License Manager reports, which are also showing blanks for both today's and historical license consumption.

I followed several articles on things to check, i.e. https://splunk.my.site.com/customer/s/article/In-the and https://community.splunk.com/t5/Splunk-Dev/How-to-fix-incorrect-Instance-name-after-change-hostname/m-p/613316, with the same outcome - the License Manager is still showing the incorrect hostname in the dropdown, and license usage stats are not reflected in the UX.

Could someone please provide additional guidance on things I can check?
REGISTER HERE

Tuesday, April 8, 2025  |  9AM–9:30AM PT

Pizza Hut's Story of a Successful Migration for Greater Reliability & Resilience

Many organizations are struggling with observability solutions that promise a lot but deliver little - leading to high costs, incomplete insights, and slower innovation. Instead of driving digital transformation, organizations find themselves bogged down by unexpected overages, fragmented tools, and siloed teams. But a better way to do observability is here. Splunk Observability is designed to meet customer needs for a unified, flexible observability solution and to evolve with them as they scale.

In this webinar, you'll hear firsthand from the Pizza Hut team who made the switch to Splunk Observability, transforming their digital strategy to achieve greater reliability, resilience, and business impact.

Join our webinar to learn:
- The digital transformation goals that drove an organization to seek a better observability solution.
- Key differentiators that make Splunk the right choice for modern businesses.
- Real-world examples showcasing improved performance, reliability, and efficiency after customers migrate to Splunk.
- How Splunk can help simplify your observability journey, improve collaboration across teams, and drive better outcomes for your business.

This is a can't-miss webinar! Register now and join us.
list_storage_passwords is a capability. That means users with a role providing this capability will be able to show credentials in the credential store for any app where they have read access to the credential store.

For example, if the app has this metadata/default.meta:

[]
access = read : [*], write : [admin]

then anyone with the "list_storage_passwords" capability can read the passwords/credentials in this app.

If an app has this default.meta:

[]
access = read : [*], write : [admin]

[passwords]
access = read : [ trustworthy_role, admin ], write : [admin]

then only users with the trustworthy role (and admins) can see the credentials.

Make sure you restrict app access to the people who need it. That way you can give list_storage_passwords to roles without the risk of people accessing credentials they are not supposed to access.

You can also set the read/write access more granularly with

[passwords/credential%3Arealmname%3Ausername%3A]

Read more here: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/secretstorage/secretstoragerbac
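To illustrate the point (a sketch only, assuming default endpoint behaviour): a user who holds the capability and has read access to an app's credential store can list its stored credentials, including the decrypted value, with something like

| rest splunk_server=local /services/storage/passwords
| table eai:acl.app realm username clear_password

which is exactly why the per-app and per-object restrictions above matter.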
@SrinivasuluS

Query 1: Get the total size occupied/consumed by each index and the remaining space available

| rest /services/data/indexes
| table title currentDBSizeMB maxTotalDataSizeMB
| eval remainingSpaceMB = maxTotalDataSizeMB - currentDBSizeMB
| rename title AS "Index Name", currentDBSizeMB AS "Current Size (MB)", maxTotalDataSizeMB AS "Max Size (MB)", remainingSpaceMB AS "Remaining Space (MB)"

Query 2: Get the total size occupied by each index since the date of onboarding

| dbinspect index=*
| stats sum(sizeOnDiskMB) as TotalSizeMB by index
| eval TotalSizeGB = round(TotalSizeMB / 1024, 2)
| table index, TotalSizeGB

Query 3: Find the remaining space available for each index

| rest /services/data/indexes
| table title, currentDBSizeMB, maxTotalDataSizeMB
| eval remainingSpaceMB = maxTotalDataSizeMB - currentDBSizeMB
| eval remainingSpaceGB = round(remainingSpaceMB / 1024, 2)
| table title, remainingSpaceGB

Query 4: Total raw data size consumed per index from the time of onboarding until now (based on _indextime), and remaining space if your Splunk limits are set for each index

| dbinspect index=*
| stats sum(rawSize) AS total_size_in_bytes by index
| eval total_size_in_gb=round(total_size_in_bytes/1024/1024/1024,2)

Query 5:

| dbinspect index=*
| search tsidxState="full" bucketId=*
| eval ageDays=round((endEpoch-startEpoch)/86400,10)
| stats min(startEpoch) as MinStartTime max(startEpoch) as MaxStartTime min(endEpoch) as MinEndTime max(endEpoch) as MaxEndTime max(hostCount) as MaxHosts max(sourceTypeCount) as MaxSourceTypes sum(eventCount) as TotalEvents sum(rawSize) as rawSizeBytes sum(sizeOnDiskMB) as sizeOnDiskBytes values(ageDays) as ageDays dc(bucketId) as countBuckets by index bucketId, state
| where ageDays<90 AND ageDays>0.0000000000
| eval sizeOnDiskBytes=round(sizeOnDiskBytes*pow(1024,2))
| eval dailyDisk=round(sizeOnDiskBytes/ageDays,5)
| eval dailyRaw=round(rawSizeBytes/ageDays,5)
| eval dailyEventCount=round(TotalEvents/ageDays)
| table index bucketId state dailyDisk ageDays rawSizeBytes, sizeOnDiskBytes TotalEvents PercentSizeReduction dailyRaw dailyEventCount ageDays
| stats sum(dailyDisk) as dailyBDiskBucket, values(ageDays), sum(dailyRaw) as dailyBRaw sum(dailyEventCount) as dailyEvent, avg(dailyDisk) as dailyBDiskAvg, avg(dailyRaw) as dailyBRawAvg, avg(dailyEventCount) as dailyEventAvg, dc(bucketId) as countBucket by index, state, ageDays
| eval bPerEvent=round(dailyBDiskBucket/dailyEvent)
| eval bPerEventRaw=round(dailyBRaw/dailyEvent)
| table dailyBDiskBucket index ageDays dailyEvent bPerEvent dailyBRaw bPerEventRaw state
| sort ageDays
| stats sum(dailyBDiskBucket) as Vol_totDBSize, avg(dailyBDiskBucket) as Vol_avgDailyIndexed, max(dailyBDiskBucket) as Vol_largestVolBucket, avg(dailyEvent) as avgEventsPerDay, avg(bPerEvent) as Vol_avgVolPerEvent, avg(dailyBRaw) as Vol_avgDailyRawVol, avg(bPerEventRaw) as Vol_avgVolPerRawEvent, range(ageDays) as rangeAge by index, state
| foreach Vol_* [eval <<FIELD>>=if(<<FIELD>> >= pow(1024,3), tostring(round(<<FIELD>>/pow(1024,3),3))+ " GB", if(<<FIELD>> >= pow(1024,2), tostring(round(<<FIELD>>/pow(1024,2),3))+ " MB", if(<<FIELD>> >= pow(1024,1), tostring(round(<<FIELD>>/pow(1024,1),3))+ " KB", tostring(round(<<FIELD>>)) + " bytes")))]
| rename Vol_* as *
| eval comb="Index Avg/day: " + avgDailyIndexed + "," + "Raw Avg/day: " + avgDailyRawVol + "," + "DB Size: " + totDBSize + "," + "Per Event Avg/Vol: " + avgVolPerEvent + "," + "Retention Range: " + tostring(round(rangeAge))
| eval comb = split(comb,",")
| xyseries index state comb
| table index hot warm cold
@doniaelansasy What are your index settings? It could just be that the data retention period expired - perhaps there was too much data, or it was too old - causing the oldest events to be pushed out of the indexes.
Usually, at the forwarding layer, the TA extracts metadata fields such as time and parses data according to the props and transforms of each TA. Once extracted, the forwarder sends cooked data to the indexing tier. But essentially, forwarders can act as a simple log redirector or as an indexer (in terms of extracting data), depending on your configuration.

Splunk also has other types of fields that are extracted at search time. Basically, you store your "raw" log and extract the fields when you run the search. This is why the TA must sometimes be installed on the search tier (your cloud instance); otherwise, these kinds of calculated or lookup fields won't work. Please follow the documentation of each technology add-on to know if you need to install it on the search tier - it often will be.
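As a rough sketch of that split (the sourcetype name and patterns below are made up for illustration): index-time settings belong in the TA wherever parsing happens (heavy forwarder or indexer), while search-time extractions only need to exist where searches run.

# props.conf on the parsing tier (index-time settings)
[example:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^timestamp=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S

# props.conf from the same TA on the search tier (search-time settings)
[example:sourcetype]
EXTRACT-user = user=(?<user>\S+)
EVAL-duration_s = duration_ms / 1000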
Hi @doniaelansasy

Did you make any changes to your indexes.conf? Is it possible something else could have changed in the system? Changing the configuration of your inputs.conf cannot result in existing data being removed.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @SrinivasuluS

I wonder if the following would work for you, or at least get you some of the way there. Run this search across "All Time" to get the size since the start of your onboarding:

| dbinspect index=*
| stats sum(sizeOnDiskMB) as sizeOnDiskMB by index

This gives the size on disk in MB for each index. The amount of available space might depend on your configuration - do you have a single storage location/volume for all indexes and both hot/warm and cold buckets? Do you know the size of your disk, or do you need to look in Splunk for this info too? This will help build out a final query for you to use.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
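If the disk size does need to come from Splunk itself, one possible starting point (a sketch - field availability can vary by version and platform) is the partitions-space endpoint, which reports capacity and free space per mount point:

| rest splunk_server=local /services/server/status/partitions-space
| table mount_point fs_type capacity free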
Something else must have happened then. Splunk on its own only deletes data in two cases:

1) You use the delete command (actually even then they are not physically deleted from the index files, just marked as unsearchable, but the net effect is the same - you can't search for those events).

2) They are rolled to frozen due to data retention policy.

And generally, any change on forwarders will not cause changes on indexers and/or search heads. They are separate components, so if you only pushed the configs to forwarders, there's no way it should cause "disappearance" of your events. Maybe other changes were introduced around the same time.

Most importantly, do you have permissions for the index?
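If retention is a suspect, one way to check (a sketch; the exact log wording varies between versions) is to look for BucketMover activity in the internal logs around the time the events went missing:

index=_internal sourcetype=splunkd component=BucketMover freeze
| stats count min(_time) as first max(_time) as last by host
| convert ctime(first) ctime(last)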