All Posts


We have logs coming to HEC as nested JSON in chunks; we're trying to break them down into individual events at the HEC level before indexing them in Splunk. I had some success removing the header/footer with props.conf and breaking the events, but it doesn't work completely: most of the logs are not broken into individual events.

Sample events:

{ "logs": [
{ "type": "https", "timestamp": "2025-03-17T23:55:54.626915Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "Root=XXXX" },
{ "type": "https", "timestamp": "2025-03-17T23:56:00.285547Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "Root=XXXX" },
{ "type": "https", "timestamp": "2025-03-17T23:57:39.574741Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "XXXX" }
] }

What I am trying to get:

{ "type": "https", "timestamp": "2025-03-17T23:55:54.626915Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "Root=XXXX" }
{ "type": "https", "timestamp": "2025-03-17T23:56:00.285547Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "Root=XXXX" }
{ "type": "https", "timestamp": "2025-03-17T23:57:39.574741Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "XXXX" }

props.conf:

[source::http:lblogs]
SHOULD_LINEMERGE = false
SEDCMD-remove_prefix = s/^\{\s*\"logs\"\:\s+\[//g
SEDCMD-remove_suffix = s/\]\}$//g
LINE_BREAKER = \}(,\s+)\{
NO_BINARY_CHECK = true
TIME_PREFIX = \"timestamp\":\s+\"
pulldown_type = true
MAX_TIMESTAMP_LOOKAHEAD = 100
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
TRUNCATE = 1000000

The current results in Splunk are shown in the attached screenshot. The header ({ logs [) and footer are removed from the events, but the split (line break) seems to be working for only one event in the chunk and the others are ignored.
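For reference, here is a slightly relaxed sketch of the same props.conf, not a confirmed fix. It assumes the chunks are posted to the HEC raw endpoint (/services/collector/raw); as far as I know, payloads sent to the event endpoint arrive as pre-formed events and do not go through LINE_BREAKER at all, which would also explain events not being split. The only changes from the original are optional whitespace around the separators and anchors:

```
[source::http:lblogs]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
TRUNCATE = 1000000
# Break between objects whether or not whitespace follows the comma ("}, {" or "},{")
LINE_BREAKER = \}(\s*,\s*)\{
# Strip the array header from the first event and the footer from the last one
SEDCMD-remove_prefix = s/^\{\s*"logs"\s*:\s*\[\s*//g
SEDCMD-remove_suffix = s/\s*\]\s*\}\s*$//g
# Timestamp extraction, tolerating zero or more spaces after the colon
TIME_PREFIX = "timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
MAX_TIMESTAMP_LOOKAHEAD = 100
```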
Hi @DarthHerm  Is this what you are looking for?  From your original search you should be able to do: | eval params=json_array_to_mv(json_extract(_raw,"parameters")) | eval newParams="{}" | foreach mode=multivalue params [| eval newParams=json_set(newParams,json_extract(<<ITEM>>,"name"),json_extract(<<ITEM>>,"value"))] | spath input=newParams | table accessDate, userName, serverHost, @SparklingTypeId, @PageSize, @PageNumber Below is a full example to get you started: | windbag | head 1 | eval _raw="{\"auditResultSets\":null,\"schema\":\"com\",\"storedProcedureName\":\"SpongeGetBySearchCriteria\",\"commandText\":\"com.SpongeGetBySearchCriteria\",\"Locking\":null,\"commandType\":4,\"parameters\":[{\"name\":\"@RETURN_VALUE\",\"value\":0},{\"name\":\"@SpongeTypeId\",\"value\":null},{\"name\":\"@CustomerNameStartWith\",\"value\":null},{\"name\":\"@IsAssigned\",\"value\":null},{\"name\":\"@IsAssignedToIdIsNULL\",\"value\":false},{\"name\":\"@SpongeStatusIdsCSV\",\"value\":\",1,\"},{\"name\":\"@RequestingValueId\",\"value\":null},{\"name\":\"@RequestingStaffId\",\"value\":null},{\"name\":\"@IsParamOther\",\"value\":false},{\"name\":\"@AssignedToId\",\"value\":null},{\"name\":\"@MALLLocationId\",\"value\":8279},{\"name\":\"@AssignedDateFrom\",\"value\":null},{\"name\":\"@AssignedDateTo\",\"value\":null},{\"name\":\"@RequestDateFrom\",\"value\":null},{\"name\":\"@RequestDateTo\",\"value\":null},{\"name\":\"@DueDateFrom\",\"value\":null},{\"name\":\"@DueDateTo\",\"value\":null},{\"name\":\"@ExcludeCustomerFlagTypeIdsCSV\",\"value\":\",1,\"},{\"name\":\"@PageSize\",\"value\":25},{\"name\":\"@PageNumber\",\"value\":1},{\"name\":\"@SortColumnName\",\"value\":\"RequestDate\"},{\"name\":\"@SortDirection\",\"value\":\"DESC\"},{\"name\":\"@HasAnySparkling\",\"value\":null},{\"name\":\"@SparklingTypeId\",\"value\":null},{\"name\":\"@SparklingSubTypeId\",\"value\":null},{\"name\":\"@SparklingStatusId\",\"value\":null},{\"name\":\"@SparklingDateFrom\",\"value\":null},{\"name\":\"@SparklingDateTo\",\"value\":null},{\"name\":\"@SupervisorId\",\"value\":null},{\"name\":\"@Debug\",\"value\":null}],\"serverIPAddress\":\"255.255.000.000\",\"serverHost\":\"WEBSERVER\",\"clientIPAddress\":\"255.255.255.255\",\"sourceSystem\":\"WebSite\",\"module\":\"Vendor.Product.BLL.Community\",\"accessDate\":\"2025-04-30T15:34:33.3568918-06:00\",\"userId\":3231,\"userName\":\"PeterVenkman\",\"traceInformation\":[{\"type\":\"Page\",\"class\":\"Vendor.Product.Web.UI.Website.Community.Operations.SpongeSearch\",\"method\":\"Page_Load\"},{\"type\":\"Manager\",\"class\":\"Vendor.Product.BLL.Community.SpongeManager\",\"method\":\"SpongeSearch\"}]}" | fields _raw | spath | eval params=json_array_to_mv(json_extract(_raw,"parameters")) | eval newParams="{}" | foreach mode=multivalue params [| eval newParams=json_set(newParams,json_extract(<<ITEM>>,"name"),json_extract(<<ITEM>>,"value"))] | spath input=newParams | table accessDate, userName, serverHost, @SparklingTypeId, @PageSize, @PageNumber  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Thank you both. This was really helpful.
TLS can be sensitive to errors and it's relatively easy to break your environment, so I'd advise you to do some lab work first (not necessarily with Splunk; apache httpd, for example, is a very well documented solution you can use to get experience with a real-world third-party CA - you can use Let's Encrypt for free).

But back to your question - as a rule of thumb, the side whose identity you need to verify must use a certificate trusted by the other side. So typically - as with most HTTPS servers on the internet - the server presents a certificate issued by a CA that the client trusts. That's the basic minimal setup which allows the client to be sure that the server is who it claims to be, and then lets it negotiate a protected encrypted connection within which it can, for example, authenticate itself using normal HTTP means.

This is the way it normally works between your browser and most internet services - your browser has a list of trusted root CAs. The server you're trying to connect to presents a certificate with a certificate chain tracing back to a root CA your browser trusts. The browser checks whether the certificate is valid, matches the server you wanted to connect to, and matches the purpose for which the certificate was issued (so you can't just set up a www server with a certificate meant for S/MIME mail encryption). Your browser negotiates an encrypted connection with the server, and within this connection, since you can now trust that the server is who it claims to be, you authenticate yourself using login/password or whatever other method you use, because you know that you're talking to a known party over a secure channel.

That's a typical use case and it matches the typical use case with a UF as a client (both as a client for the DS and as a client connecting to the indexer/HF). You need to have a certificate issued by a trusted CA (or having a certification chain back to a trusted CA) on the server (DS, indexer and so on) and have the UF trust the CA certificate. In this case all UFs just need to have this one CA certificate configured as a trusted CA.

But you can also, if needed, configure client authentication. In that case each UF would have its own certificate, which would need to be issued by a CA trusted by the server. In this scenario, typically referred to as mTLS or mutual authentication, each side verifies the identity of the other endpoint, so each unique subject (in our case each UF) would require a separate individual cert. This scenario is difficult to manage without a lot of administrative overhead (just tracking the expiration dates of dozens of certs is enough of a headache, not to mention renewing and installing them), so it's not commonly used. But it's possible.
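To make the server-side half of this concrete, here is a minimal sketch of what the DS end might look like, assuming you have already created a server cert and a CA chain file (the file names and paths below are placeholders, not Splunk defaults):

```
# server.conf on the deployment server (sketch; adjust paths to your own PKI layout)
[sslConfig]
# Certificate the DS presents on port 8089 (server cert + key, PEM)
serverCert = /opt/splunk/etc/auth/mycerts/ds-server-cert.pem
sslPassword = <key passphrase, if any>
# CA chain used to validate peers
sslRootCAPath = /opt/splunk/etc/auth/mycerts/my-ca-chain.pem
# Only needed for the mTLS variant described above:
# requireClientCert = true
```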
Ok, so just to summarize to make sure I understand, I would create a custom cert (either self-signed or third-party) and configure the DS to use that cert per the links I had above. Then I would configure the UFs to trust that cert's CA in their deploymentclient.conf file.  If I understand correctly, I'm only creating the one custom cert for the deployment server itself. The UFs don't need their own cert, they only need to trust the CA for that cert I created for the DS. Is that correct?  (Sorry for the pedantry, I'm just unsure if I fully understand the cert process in general and don't want to assume I understand then break my environment)
First tag your event with one of two classes:

| eval type=if(isnull(contactid),"lw_message","standardised_message")

and then do:

| xyseries id type time
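Put together, a minimal sketch of the whole pipeline might look like this, assuming the fields are literally named id, time and contactid as in the thread, and renaming the pivoted columns to match the desired headers (the rename targets are my assumption):

```
| eval type=if(isnull(contactid),"lw_message","standardised_message")
| xyseries id type time
| rename lw_message AS lw_message_time, standardised_message AS standardised_message_time
```

xyseries produces one row per id, one column per value of type, with time as the cell value.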
I have data like this:

id    time               Contacts
x1    4/22/2011 10:00    676689
x1    4/23/2011 11:00

I want it as shown below. lw_message_time is the time when the contactid column is null, and the other is when the contactid column has a value.

id    lw_message_time    standardised_message
x1    4/23/2011 10:00    4/23/2011 11:00
Hi @StephenD1

You will need to ensure that the UFs trust the certificate that the DS uses on port 8089.

You can specify a custom CA to be used by the UFs in deploymentclient.conf under the caCertFile key in the [deployment-client] stanza. By default this uses the CA specified in server.conf but can be overridden for your DS. For more info check the docs at https://docs.splunk.com/Documentation/Splunk/latest/Admin/Deploymentclientconf#:~:text=normally.%0A*%20Default%3A%20false-,caCertFile,-%3D%20%3Cpath%3E%0A*%20Specifies%20a

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
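As a concrete illustration of the above, a sketch of the UF side (the paths and hostname are placeholders, not values from this thread, and it assumes the CA chain file has already been copied to the UF):

```
# deploymentclient.conf on each UF (sketch)
[deployment-client]
# CA that signed the certificate the DS presents on port 8089
caCertFile = /opt/splunkforwarder/etc/auth/mycerts/my-ca-chain.pem
sslVerifyServerCert = true

[target-broker:deploymentServer]
targetUri = ds.example.com:8089
```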
Hi,

Your response was the key to making the idea I had happen. I had to make some changes to the query: since I have a long file list, I decided to list the files with the "| rest" command; then I was getting wildcard issues and had to create macros to overcome that problem.

Now I'm working with the Splunk admin team because I am getting the error below from "| sendemail", which is caused by missing admin access:

[map]: command="sendemail", 'rootCAPath' while sending mail to: jpichardo@jaggaer.com

I cannot use the "| sendresults" command because the version we have does not support it.

| rest /servicesNS/-/-/data/lookup-table-files f=title splunk_server=local ```To avoid the "you do not have the "dispatch_rest_to_indexers" capability" warning```
| fields title
| search title="lk_file*.csv"
| dedup title
| map maxsearches=9999 search="inputlookup $title$
|eval filename=$title$
| search path!=`macroDoubleQuotation`
| stats values(duration_time) AS duration_time by path filename
| `macroMakemvNewLineDelimeter` duration_time
| eval duration_time=`macroSplitSpace`
| `macroPerformanceP90`
| sort path
| `macroSendMailPerformanceSlaList`"

Thanks so much for helping me with this!

Regards,
Ugh. Unfortunately you have your data in this highly inconvenient form of fieldname=something, fieldvalue=something, from which you are supposed to deduce something=something. Yes, you can do spath, and foreach might actually prove better than mvexpand, but it won't be pretty.

The main problem with this data format is that in order to do anything reasonable with it (including initial filtering) you have to process it and transform it into something completely different. If your dataset size isn't that big and you're not going to filter the events anyway, you can get by with it. But if you wanted to select just one user... you'd still need to dig through all your events. That's not a very efficient way to do it. So while I usually say, as a rule of thumb, not to fiddle with raw regexes over structured data, in this case, if you are absolutely sure that the format is always like this ( {a:b,c:d} -> b=d ), you could hazard a regex-based extraction as long as you're aware of the risks. Alternatively you could use summary indexing to transform it, once per event, into the desired format with properly rendered key=value pairs and then search from the summary index.
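For illustration, here is a minimal sketch of the regex-based extraction mentioned above, assuming the parameters really always look like {"name":"@Something","value":...} as in the sample event; the extracted field names (param_name, param_value, params) are my own choice, not anything defined in the thread:

```
| rex max_match=0 "\{\"name\":\"(?<param_name>[^\"]+)\",\"value\":(?<param_value>\"[^\"]*\"|[^,}]+)\}"
| eval params=mvzip(param_name, param_value, "=")
| spath userName
| table _time userName params
```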
I'm trying to replace the default SSL certs on the deployment server with third-party certs but I'm confused about what it entails. I don't know much about TLS certs, mostly just the basics. I'm following these documents for the deployment server-to-client:

https://docs.splunk.com/Documentation/Splunk/latest/Security/StepstoSecuringSplunkwithTLS
https://docs.splunk.com/Documentation/Splunk/latest/Security/ConfigTLScertsS2S

If I make the changes on the deployment server to point to the third-party cert, do I also need to change the cert on the UFs to keep communicating on port 8089?
I'm continuing to work on dashboards to report on user activity in our application. I've been going through the knowledge base, bootcamp slides, and Google, trying to determine the best route to report on the values in log files such as this one. The dashboards I am creating show activity in the various modules: what values are getting selected and what is being pulled up. I looked at spath and mvexpand and wasn't getting the results I was hoping for; it might have been that I wasn't formatting the search correctly, and also how green I and my workplace are to Splunk. Creating field extractions has worked for the most part to pull the specific values I wanted to report, but further on I'm finding incorrect values being pulled in. Below is one such event that's been sanitized; it's in valid JSON format.

I'm trying to build a table showing the userName, date and time, serverHost, SparklingTypeId, PageSize, and PageNumber. The other values not so much. Are spath and mvexpand along with eval statements the best course? I was using field extractions in a couple of other modules but then found incorrect values were being added.

{"auditResultSets":null,"schema":"com","storedProcedureName":"SpongeGetBySearchCriteria","commandText":"com.SpongeGetBySearchCriteria","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@SpongeTypeId","value":null},{"name":"@CustomerNameStartWith","value":null},{"name":"@IsAssigned","value":null},{"name":"@IsAssignedToIdIsNULL","value":false},{"name":"@SpongeStatusIdsCSV","value":",1,"},{"name":"@RequestingValueId","value":null},{"name":"@RequestingStaffId","value":null},{"name":"@IsParamOther","value":false},{"name":"@AssignedToId","value":null},{"name":"@MALLLocationId","value":8279},{"name":"@AssignedDateFrom","value":null},{"name":"@AssignedDateTo","value":null},{"name":"@RequestDateFrom","value":null},{"name":"@RequestDateTo","value":null},{"name":"@DueDateFrom","value":null},{"name":"@DueDateTo","value":null},{"name":"@ExcludeCustomerFlagTypeIdsCSV","value":",1,"},{"name":"@PageSize","value":25},{"name":"@PageNumber","value":1},{"name":"@SortColumnName","value":"RequestDate"},{"name":"@SortDirection","value":"DESC"},{"name":"@HasAnySparkling","value":null},{"name":"@SparklingTypeId","value":null},{"name":"@SparklingSubTypeId","value":null},{"name":"@SparklingStatusId","value":null},{"name":"@SparklingDateFrom","value":null},{"name":"@SparklingDateTo","value":null},{"name":"@SupervisorId","value":null},{"name":"@Debug","value":null}],"serverIPAddress":"255.255.000.000","serverHost":"WEBSERVER","clientIPAddress":"255.255.255.255","sourceSystem":"WebSite","module":"Vendor.Product.BLL.Community","accessDate":"2025-04-30T15:34:33.3568918-06:00","userId":3231,"userName":"PeterVenkman","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.Community.Operations.SpongeSearch","method":"Page_Load"},{"type":"Manager","class":"Vendor.Product.BLL.Community.SpongeManager","method":"SpongeSearch"}]}
limit=N is the same as limit=topN, and bottomN appeared in 8.1, which was several years ago.
this was perfect, thank you!
Scratch that first line. Use this:

index="ifi" appEnvrnNam="ANY" msgTxt="Standardizess SUCCEEDED - FROM:*"
  index="ifi" appEnvrnNam="ANY" msgTxt="StandardizedAddress SUCCEEDED*" | eval _raw="Standardizedss SUCCEEDED - FROM: {\"Standardizedss \":\"SUCCEEDED\",\"FROM\":{\"Address1\":\"123 NAANNA SAND RD\... See more...
index="ifi" appEnvrnNam="ANY" msgTxt="StandardizedAddress SUCCEEDED*"
| eval _raw="Standardizedss SUCCEEDED - FROM: {\"Standardizedss \":\"SUCCEEDED\",\"FROM\":{\"Address1\":\"123 NAANNA SAND RD\",\"Address2\":\"\",\"City\":\"GREEN\",\"County\":null,\"State\":\"WY\",\"ZipCode\":\"44444-9360\",\"Latitude\":null,\"Longitude\":null,\"IsStandardized\":true,\"AddressStandardization\":1,\"AddressStandardizationType\":0},\"RESULT\":1,\"AddressDetails\":[{\"AssociatedName\":\"\",\"HouseNumber\":\"123\",\"Predirection\":\"\",\"StreetName\":\"NAANNA SAND RD\",\"Suffix\":\"RD\",\"Postdirection\":\"\",\"SuiteName\":\"\",\"SuiteRange\":\"\",\"City\":\"GREEN\",\"CityAbbreviation\":\"GREEN\",\"State\":\"WY\",\"ZipCode\":\"44444\",\"Zip4\":\"9360\",\"County\":\"Warren\",\"CountyFips\":\"27\",\"CoastalCounty\":0,\"Latitude\":77.0999,\"Longitude\":-99.999,\"Fulladdress1\":\"123 NAANNA SAND RD\",\"Fulladdress2\":\"\",\"HighRiseDefault\":false}],\"WarningMessages\":[\"This mail requires a number or Apartment number.\"],\"ErrorMessages\":[],\"GeoErrorMessages\":[],\"Succeeded\":true,\"ErrorMessage\":null}"
| rex "Standardizedss SUCCEEDED - FROM: (?<event>.*)"
| spath input=event
| rename AddressDetails{}.* as *, WarningMessages{} as WarningMessages
| table Latitude Longitude WarningMessages
Hi Matt, Thanks for the advice.  I figured the solution out, see my reply. Are you aware of a way to pass this onto the relevant team? Thanks, Stanley
I have figured out the solution to the issue. The problem lies in the Splunk Security Essentials saved search `Generate MITRE Data Source Lookup`, specifically the line `| sort 0 ds_id`.

Solution: update the line

```
| sort 0 ds_id
```

to

```
| sort 0 ds_id external_id
```

Hopefully this fixed search can be implemented in future app releases.
I know this is a pretty old post, but wanted to put this here for anyone else looking. This has bothered me for some time. It seems timechart, as of some version, supports 3 limit options:

limit=N
limit=topN
limit=bottomN

https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Timechart
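For instance, a quick sketch of the difference (the index and split-by field here are made up for illustration):

```
index=web | timechart span=1h limit=top5 useother=false count BY host
index=web | timechart span=1h limit=bottom5 useother=false count BY host
```

The first keeps the five most frequent hosts, the second the five least frequent.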
First things first.

1. You posted this in the Splunk SOAR section of Answers but the question seems to be about Splunk Enterprise. I'll move this thread to the appropriate section, but please try to be careful about where you post - the sections are there so that we keep the forums tidy and make it easier to find answers to your problems.

2. We have no idea if you have a standalone installation or clustered indexers. A cluster involves way more work to do what you're asking about and needs special care not to break it.

3. While moving indexes around is possible after their initial creation, it's a risky operation if not done properly, and therefore I'd advise an inexperienced admin against attempting it. You have been warned.

One more very important thing - what do you mean by "external storage"? If you plan on moving some of your indexes onto some CIFS or NFS (depending on the system your Splunk runs on) share, forget it. This type of storage can be used for storing frozen buckets but not for searchable data. And now the second warning. You have been warned twice.

Since Splunk's indexes are "just" directories on a disk, there are two approaches to the task of moving the data around.

Option one - stop your Splunk, move the hot/warm and/or cold directories to another directory, adjust the index definition in indexes.conf accordingly, and start Splunk.

Option two - stop your Splunk, move the hot/warm and/or cold directories to another directory, make the OS see the new location under the old location (using a bind mount, symlink or junction - depending on the underlying OS), and start Splunk.

In the case of clustered indexers you must go with the second option, at least until you've moved the data on all indexers, because all indexers must share the same config, so you can't reconfigure just some of the indexers. Option two is also the only way to go if you want to move just some of your buckets (like the oldest half of your cold buckets), but this is something even I wouldn't try to do in production. You have been warned thrice!

Having said that - it would probably be way easier to attach the external storage as frozen storage and make Splunk rotate the older buckets there. Of course frozen buckets are not searchable, so from Splunk's point of view they are effectively deleted. (And handling frozen buckets in a cluster can be tricky if you want to avoid duplicated frozen buckets.)

Another option (but one that completely messes with your overall architecture) would be to use remote S3-compatible storage and define SmartStore-backed indexes, but that is a huge overhaul of your whole setup, and while in some cases it can help, in others it can cause additional problems, so YMMV.
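To make option one concrete, here is a rough sketch of what the indexes.conf change might look like after moving the cold buckets to a new mount point; the index name and paths are placeholders, not anything from this thread:

```
# indexes.conf (sketch) - cold buckets relocated to a new, larger volume
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = /new_big_volume/splunk/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Optional: roll aged-out buckets to the external (non-searchable) frozen storage
coldToFrozenDir = /mnt/frozen_storage/my_index
```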