All Posts



Hi @Real_captain , when you use a base search, you have to call it in the search tag: <search base="your_base_search"> Ciao. Giuseppe
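To illustrate (a minimal Simple XML sketch — the id "your_base_search" and the queries themselves are made up for this example):

```xml
<dashboard>
  <label>Base search example</label>
  <!-- Base search declared once at dashboard level -->
  <search id="your_base_search">
    <query>index=main sourcetype=access_combined | fields status, uri</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <!-- Post-process search referencing the base search by id -->
        <search base="your_base_search">
          <query>| stats count by status</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```

The post-process search inherits its time range and events from the base search, so it only carries the additional pipeline after the base.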
Sorry, but this is untrue. There is no change in bucket structure between warm and cold. It's just that the bucket is moved from one storage location to another. I suppose that from the technical point of view buckets could go from hot "directly" to cold, but it would be a bit more complicated from the Splunk internals point of view. When a hot bucket is rolled to warm, its indexing ends and it gets renamed (which is an atomic operation) within the same storage unit. Additionally, hot and warm buckets are rolled on different bases. So technically, a bucket could roll from hot to warm because of the hot bucket lifecycle parameters (especially maxDataSize) and immediately afterwards (on the next housekeeping thread pass) get rolled to cold because the maximum warm bucket count was reached.
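For reference, the lifecycle parameters mentioned above live in indexes.conf. A minimal sketch — the index name and the values are made up for illustration; check the indexes.conf spec for your version's defaults:

```
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

# Roll a hot bucket to warm once it reaches this size ("auto" lets Splunk decide)
maxDataSize = auto

# Maximum number of warm buckets; exceeding this rolls the oldest warm bucket to cold
maxWarmDBCount = 300
```

Note that hot-to-warm rolling is driven by per-bucket conditions (size, span), while warm-to-cold is driven by the warm bucket count — two independent mechanisms, as described above.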
At least as of the time of this comment, the docs say "No": "The Azure Storage Blob modular input for Splunk Add-on for Microsoft Cloud Services does not support the ingestion of gzip files. Only plaintext files are supported."
Hi all, I am trying to build a query to monitor my indexer rolling restart. I would like to know how much time it has taken, when it started, and when it ended. I can only see when it has started but cannot see any messages indicating when it has completed.

INFO CMMaster [3340464 TcpChannelThread] - Starting a rolling restart of the peers.
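One way to sketch a duration search against the cluster manager's internal logs — hedged, since the exact completion message text varies by Splunk version; verify the "end" match below against what your own _internal index actually logs, as it is an assumption here:

```
index=_internal sourcetype=splunkd component=CMMaster "rolling restart"
| eval phase=if(match(_raw, "Starting a rolling restart"), "start", "end")
| stats min(eval(if(phase="start", _time, null()))) AS restart_start
        max(eval(if(phase="end", _time, null()))) AS restart_end
| eval duration=tostring(restart_end - restart_start, "duration")
```

This treats every non-"Starting" rolling-restart message as a candidate end marker, which is crude; once you find the precise completion message in your logs, match on that string instead.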
Hi @gcusello, after using the correct fieldForLabel I am still not able to fetch the result in the dropdown using the dynamic query. The field POH_Group1 is produced by the base search at the top of the dashboard (<search id="base">):

<input type="dropdown" token="POH_token" depends="$POH_input$" searchWhenChanged="true">
  <label>POH_Group</label>
  <fieldForLabel>POH_Group1</fieldForLabel>
  <fieldForValue>POH_Group1</fieldForValue>
  <choice value="*">All</choice>
  <default>*</default>
  <search>
    <query>| dedup POH_Group1 | table POH_Group1</query>
    <earliest>-30d@d</earliest>
    <latest>now</latest>
  </search>
</input>
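For what it's worth, if the dropdown is meant to post-process the base search (as suggested earlier in the thread), the input's search tag would need to reference it via the base attribute. A sketch, assuming the base search is declared as <search id="base"> and already returns POH_Group1:

```xml
<search base="base">
  <query>| dedup POH_Group1 | table POH_Group1</query>
</search>
```

Note that a post-process search inherits its time range from the base search, so the <earliest>/<latest> tags would be dropped here.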
Well, it's actually _not_ disaster recovery. It's an HA solution with some assumed level of fault tolerance. @jiaminyun There is no such thing as "0 RPO" unless you make some boundary assumptions and prepare accordingly. An HA infrastructure like the ones from the SVAs can protect you in case of some disasters but will not protect you from others (like misconfiguration or deliberate data destruction). If you're OK with that - be my guest. Just be aware of it. RTO actually depends on your equipment, storage, and the resources (including personnel) you can allocate to the recovery task.
OK. Let's back up a little. You have static CSV files which you uploaded to Splunk. Now you're trying to get them from Splunk with PowerBI. Why the extra step? As far as I remember, you can just set up an ODBC connection to a CSV file (in case PowerBI can't handle a raw file on its own). What's the point of dropping Splunk in as the middleman?
In Splunk's indexing process, buckets move through different stages: hot, warm, cold, and eventually frozen. The movement from hot to cold is a managed and intentional process due to the roles these buckets play and their interaction with Splunk's underlying data architecture.

1. Hot Buckets: Actively Written
Hot buckets are where Splunk is actively writing data. This makes them volatile because they are still receiving real-time events and may be indexed (compressed and organized) as part of ongoing ingestion.
Technical Limitation: Because of their active state, they can't directly roll into cold storage, which is designed for more static, read-only data.

2. Warm Buckets: Transition to Stability
Once a hot bucket reaches a certain size or the active indexing period ends, it is closed and then rolled into a warm bucket. This transition is important because warm buckets are no longer being written to, making them stable but still optimized for searching.
Reason for the Warm Stage: The warm stage allows for efficient search and retrieval of data without impacting the performance of the write operations happening in hot buckets.

Why Hot Can't Skip Directly to Cold
Active Writing: Hot buckets are being actively written to. If they were to move directly to cold, it would require Splunk to freeze and finalize the data too early, disrupting ongoing indexing operations.
Search and Performance Impact: Splunk optimizes warm buckets for active searches and allows warm data to remain in a searchable, performant state. Cold buckets, being long-term storage, are not indexed for real-time or high-performance search, making it impractical to move hot data directly into cold without this intermediary warm phase.

Conclusion: The design of the bucket lifecycle (hot → warm → cold) in Splunk ensures that data remains both accessible and efficiently stored based on its usage pattern.
The warm bucket stage is crucial because it marks the end of write operations while maintaining search performance before the data is pushed into more permanent, slower storage in cold buckets. Skipping this stage could cause inefficiencies and performance issues in both data ingestion and retrieval processes.
The Splunk Validated Architectures manual should help.  You may be interested in the M3/M13 or M4/M14 models.  See https://docs.splunk.com/Documentation/SVA/current/Architectures/M4M14
Hi, I am facing the same issue (the Timed Out Reached error), but I am unable to find IP Allow List Management -> "Search Head API Access" in my server settings on the Splunk Cloud side.
OK so your search for your report is discounting the data from the other sources (for some reason). What is this search? What is it doing to discount the other sources?
Actually, I have to connect Splunk with Power BI, and I have to save the result of the search query as a report so I can connect my report with Power BI. The report is created only for the data that is displayed by the search query; if the search query does not display data for the other tables, it is not contained in the report, and hence not in Power BI.
That's awesome. Glad it worked for you too
Just to be clear, are you saying that your sample data (as shown) has been ingested as a single event and that there are other lines in the event which are unrelated or at least you want to ignore?
So all the data is there - what made you think you could not see it?
@ptothehil This is the resolution for me too. I downloaded it on a personal device and hashed it and it was the correct hash. When attempting to bring it onto the corporate network it is being corrupted as it is being flagged as containing a virus. 
I get the list of all sources and the total events within each source.
What do you get if you do this? index="SC_POC1" | stats count by sourcetype source
The stats command is counting events, not occurrences of status values. You need to use mvexpand to separate out the test cases so you can count the individual status values.

| spath suite.case{} output=cases
| mvexpand cases
| spath input=cases status output=Status
| spath input=cases name output=case
| spath suite.name output=suite
| spath MetaData.jobname output=Job_Name
| spath MetaData.buildnumber output=Build_Variant
| spath MetaData.JENKINS_URL output=Jenkins_Server
| stats count(eval(Status="Execution Failed" OR Status="case_Failed")) AS Failed_cases,
        count(eval(Status="Passed")) AS Passed_cases,
        count(eval(Status="Failed" OR Status="case_Error")) AS Execution_Failed_cases,
        dc(case) AS Total_cases,
        dc(suite) AS "Total suite"
        by Job_Name Build_Variant Jenkins_Server
Dear team, is there any recommended way to index .zip files from Azure Blob Storage via the Splunk Add-on for Microsoft Cloud Services? If it is impossible directly, is there any preferred workaround to unzip them somehow? Big thanks!