All Topics



I am trying to upgrade the indexer version to 8.0.7, but an HP OS Forwarder is currently installed and I cannot find an HP OS Forwarder that supports version 8. Does anyone know a way to handle this?
Hi, I am preparing a single instance deployment for a small client who has some Windows 2012 machines. In my lab I have Splunk 8.0.5 and SAI 2.1.0 build 20. I added a Win2012 VM for testing using the PowerShell script. Then I shut it down and, after rolling back the snapshot, I changed the hostname and IP and installed the UF again using the script. Now, 30 minutes later, while the VMs are off (even the images are removed), SAI still shows them as active, and every time I refresh, the "Last time data collected" value is updated to now! I expected it to show the entities as inactive. I also tried to delete the nodes from the Actions menu, but after deletion they reappear when I refresh. Please advise. Thank you.
Hi, I have the below sample data:

| makeresults | eval a="1"
| append [| makeresults | eval a="2"]
| append [| makeresults | eval a="3"]
| append [| makeresults | eval a="4"]
| append [| makeresults | eval a="0"]
| append [| makeresults | eval a="2"]
| append [| makeresults | eval a="4"]
| append [| makeresults | eval a="6"]
| append [| makeresults | eval a="8"]

Here the `a` field values increase, drop back to zero, and then increase again. I want to find the maximum value before each drop to zero and, if the sequence does not end in a zero, the last value as well. Then I need to sum these up. So for the sample data above, the values 4 and 8 should be retrieved so that I can sum them to get 4+8=12 as the final result. Thanks,
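One way this could be approached (a sketch, not the poster's own solution, appended after the sample search above and assuming the events arrive in the order shown, since streamstats depends on event order): number the "blocks" separated by zeros, take the maximum per block, then sum the block maxima.

| eval a=tonumber(a)
| streamstats sum(eval(if(a==0,1,0))) as block
| stats max(a) as block_max by block
| stats sum(block_max) as total

With the sample data above this gives block maxima of 4 and 8, so total=12.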
So I have seen similar questions, but I can't seem to find anything that helps with my specific question. I have the following search:

index=x sourcetype="y"
| stats avg(ResponseTime) as "Average Response Time" by Consumer
| eval "Average Response Time"=round('Average Response Time',0)
| replace "comp_a" with "Company A" "comp_b" with "Company B" "comp_c" with "Company C"

This returns the column chart below. The only issue I am having from this point is that I cannot figure out how to change the color of individual columns (e.g. make the Company A column red, the Company B column blue, the Company C column green), since they are all displaying that avg(ResponseTime) value. I am currently using <option name="charting.seriesColors">[0x006D9C]</option> within the chart source to make it this shade of blue, but even if I add other colors, comma-separated, within the seriesColors brackets, it does not work. Any suggestions?
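One workaround worth trying (a sketch, not a confirmed answer): with a single series, seriesColors can only apply one colour to the whole series, so to colour columns individually you can turn each Consumer into its own series and then colour the series by name.

index=x sourcetype="y"
| stats avg(ResponseTime) as "Average Response Time" by Consumer
| eval "Average Response Time"=round('Average Response Time',0)
| replace "comp_a" with "Company A" "comp_b" with "Company B" "comp_c" with "Company C"
| eval {Consumer}='Average Response Time'
| fields - Consumer, "Average Response Time"

Each company then becomes a separate series, and an option along the lines of <option name="charting.fieldColors">{"Company A": 0xFF0000, "Company B": 0x0000FF, "Company C": 0x00FF00}</option> in the chart source should colour them independently; the colour values here are just examples.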
Hi all, I'm looking to start implementing our Splunk configuration in Terraform and I would like to be able to manage our deployment apps that way as well. Problem being that I can't find any way to create or modify deployment apps via the REST API - which means it wouldn't be doable with Terraform (well, at least, not as cleanly.) Just wanted to confirm that I understood the docs correctly and that the only way to manage deployment apps is through directly editing the `deployment-apps` folder?
Hello Team, I have my ServiceNow ticketing logs coming into Splunk and need some help and suggestions. The ticket status goes through values like "Draft", "Recover", "Cancelled", "Analysis", "Closed". Suppose ticket #2345 starts in Draft, then goes to Recover, then to Analysis, then to Closed. I pull the current status of ticket #2345 with the SPL query below over the last 24 hours:

index=main source=xyz dv_state=* dv_opened_by=pox OR dv_opened_by=IOP
| dedup number dv_state
| table number, dv_state, dv_opened_by, dv_opened_at

Search result: I am getting the status of ticket 2345 as "Draft", but the ticket is actually in the Closed state. I want ticket #2345 to show only its Closed state. Let me know what went wrong. My expectation: if ticket #2345 has gone through to the Closed state, what would the SPL query be for the last 24 hours?
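Not an authoritative answer, but a sketch of one approach: dedup number dv_state keeps one event per ticket/state combination, so every state the ticket passed through survives. If only the most recent state per ticket is wanted, latest() may work, assuming _time on the events reflects when each status change was logged (field names taken from the post above):

index=main source=xyz dv_state=* (dv_opened_by=pox OR dv_opened_by=IOP) earliest=-24h
| stats latest(dv_state) as dv_state, latest(dv_opened_by) as dv_opened_by, latest(dv_opened_at) as dv_opened_at by number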
hi experts, I'm new to Splunk and have an existing Splunk instance (syslog). I can get all the data into it, but I want to generate a report showing users' successful authentications. We have 2 SSIDs via ISE (an enterprise deployment with nodes in the DMZ and intranet): 1) Guest - open/web redirection, 2) Staff - WPA2/802.1X auth. I played around with it and I'm getting the correct report for Guest, but for Staff I'm only getting failed auths. Has anyone here done this before? Any tips/ideas? TIA

Worked for Guest:
index=network sourcetype=cisco:ise:syslog Authentication succeeded*
| table EndPointMACAddress, UserName, Address, ISEPolicySetName
| dedup EndPointMACAddress

Not working for Staff:
index=network sourcetype=cisco:ise:syslog Authentication succeeded*
| table EndPointMACAddress, UserName, Address, ISEPolicySetName
| dedup EndPointMACAddress
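Not a confirmed fix, but one thing worth checking as a sketch: 802.1X successes often arrive under an ISE message category rather than the free-text "Authentication succeeded" string, so searching on the passed-authentications category (the category name below is an assumption about your ISE logging configuration and may need adjusting) might catch the Staff SSID as well:

index=network sourcetype=cisco:ise:syslog "CISE_Passed_Authentications"
| table EndPointMACAddress, UserName, Address, ISEPolicySetName
| dedup EndPointMACAddress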
2020-12-17T01:21:44.690341+00:00 txn1.test-fdb-us-south-004 2020-12-17T01:21:44Z { "Severity": "10", "Time": "1608168104.425364", "Type": "MachineMetrics", "ID": "0000000000000000", "Elapsed": "5.00001", "MbpsSent": "2.59981", "MbpsReceived": "2.3487", "OutSegs": "12262", "RetransSegs": "0", "CPUSeconds": "0.111557", "TotalMemory": "67510792192", "CommittedMemory": "4303114240", "AvailableMemory": "63207677952", "ZoneID": "txn1", "MachineID": "txn1", "Machine": "10.95.111.226:4503", "LogGroup": "default", "Roles": "RV", "TrackLatestType": "Original" }

I came up with:

index=fdb sourcetype=* |eval (rex "^s(?<severity>[.]*)y$") as sev | stats count(eval(sev "40")) as ERROR count(eval(sev "20")) as WARN count(eval(sev "10")) as INFO by sourcetype

but it doesn't work. I want to create a table or time chart listing all the severities by sourcetype.
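A minimal sketch of what may have been intended (not a definitive answer): rex is a command and can't be called inside eval, so the severity has to be extracted first and then grouped. Based on the sample event above, assuming the severity always appears as "Severity": "NN" in _raw:

index=fdb sourcetype=*
| rex "\"Severity\":\s*\"(?<severity>\d+)\""
| eval sev_label=case(severity=="40","ERROR", severity=="20","WARN", severity=="10","INFO", true(),"OTHER")
| chart count over sourcetype by sev_label

For a time chart instead of a table, the last line could be swapped for | timechart count by sev_label. If the JSON part of the event is already auto-extracted in your environment, the rex step may not be needed.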
Hello, this is nasha430. Thanks to your help, I've been running this program very gratefully. I'm looking over the manual translated into Korean, and I found that a report linked in a dashboard cannot share the dashboard's time picker. If the shared time picker cannot be used, does the report linked in the dashboard not get updated? If not, how can we keep the report up to date? Thanks for your cooperation. Have a nice day.
Below is my log entry:

DateTime=2020-12-16 14:19:01:888 UTC, Type=Orchestrator Event Log, Environment=prod, Thread=[Processor-ENSDelivery-PRODOCSNotification001-5], Logger=com.expedia.www.orchestrator.service.OrchestratorProcessor Message=[Email_TransactionStatus=SUCCESS, OrchestrationStatus=WWW_Template, FallBackStatus=MODULE_BUILDER_FAILED, FallBackReason=LxVirtualCurrencyRefundAmount and LXCreditCardRefundAmount both are null or empty]

This is my Splunk query:

index=app splunk_server_group="bexg*" sourcetype=orchestrator-service* NOT "url=/isActive" NOT "Logger=com.netflix.servo.publish.JmxMetricPoller" Email_TransactionStatus FallBackStatus=MODULE_BUILDER_FAILED | stats count by FallBackStatus, FallBackReason

It shows:

FallBackStatus=MODULE_BUILDER_FAILED, FallBackReason=LxVirtualCurrencyRefundAmount, Count=1

My expected output:

FallBackStatus=MODULE_BUILDER_FAILED, FallBackReason=LxVirtualCurrencyRefundAmount and LXCreditCardRefundAmount both are null or empty, Count=1

It seems that in this case the FallBackReason field value is getting cropped.
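Not a definitive fix, but a sketch of one likely cause and workaround: automatic key-value extraction stops at the first space, so a multi-word value like this FallBackReason gets cropped. Re-extracting it with rex up to the closing bracket (based on the sample event above, and into a new field name to avoid colliding with the auto-extracted one) may help:

index=app splunk_server_group="bexg*" sourcetype=orchestrator-service* NOT "url=/isActive" NOT "Logger=com.netflix.servo.publish.JmxMetricPoller" Email_TransactionStatus FallBackStatus=MODULE_BUILDER_FAILED
| rex "FallBackReason=(?<FallBackReasonFull>[^\]]+)"
| stats count by FallBackStatus, FallBackReasonFull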
I'm using the "LogPush" feature from Cloudflare to get "log events" put into a Splunk index. The log events are all JSON, but apparently many records are sent in a single "push". My problem is, if I search for a "field=value" it will only match if the "field" is in the first record. If I search with just the "value" the record is found, but I need Splunk to recognize the fields. I attached a screenshot of the search results when I just searched for "60286da6ca69eb29". I want to find that "record" via "RayID=60286da6ca69eb29". And really, I just want that record, that starts with the RayID field, and ends with the "WorkerSubrequest". If I search for "RayID=60286dc754aac4e8" (the first record) it matches the way I would expect. So, the question is: Is there a way to get just that portion of the log entry (i.e. just the portion that has "RayID=60286da6ca69eb29), or is that impossible because of the way the records are put into Splunk?
I know the documentation for the Zeek add-on says it supports specific versions: Zeek (aka Bro) 2.1, 2.2, 2.3, 2.4, 2.5. But has anyone used it with Zeek 3.x? Or does anyone have a suggestion for onboarding Zeek 3+? Is sending the data as JSON the best option? TY!
I'm struggling with parsing this JSON. This query shows the part of a larger JSON element (response.rules).   | makeresults | eval _raw="{\"CLEANSE_001\":{\"start\":1608151620347,\"nodes\":[{\"start\":1608151620347,\"name\":\"Cleanse In\",\"end\":1608151620576}],\"end\":1608151620576},\"ENRICH_001\":{\"start\":1608151620376,\"nodes\":[{\"start\":1608151620376,\"name\":\"Resolve Providers\",\"end\":1608151620378,\"status\":{\"errorCode\":0}}]},\"ROUTER_001\":{\"start\":1608151620408,\"nodes\":[{\"nodeId\":\"48290e0c.13b5\",\"startTime\":1608151620408,\"endTime\":1608151620564,\"action\":0,\"status\":null,\"name\":\"Modality Router\",\"type\":\"router\"}]},\"COMMON_001\":{\"start\":1608151620428,\"nodes\":[{\"nodeId\":\"471341df.4d01e\",\"startTime\":1608151620428,\"endTime\":1608151620431,\"action\":1,\"status\":null,\"name\":\"Check Blacklist\",\"type\":\"bank-accounts\"},{\"nodeId\":\"c53c6260.e6a06\",\"startTime\":1608151620434,\"endTime\":1608151620436,\"action\":0,\"status\":null,\"name\":\"Complete Processing\",\"type\":\"finalise\"}],\"end\":1608151620436},\"RULE_001\":{\"start\":1608151620467,\"nodes\":[{\"nodeId\":\"1f79b359.f41a6d\",\"startTime\":1608151620467,\"endTime\":1608151620471,\"action\":1,\"status\":null,\"name\":\"Is Item Whitelisted\",\"type\":\"item-codes\"},{\"nodeId\":\"5f220c75.7d7314\",\"startTime\":1608151620474,\"endTime\":1608151620514,\"action\":0,\"status\":{\"message\":\"History match found\",\"criteria\":{\"fund\":\"AAA\",\"policyId\":\"12345678X\"}},\"name\":\"Does History Exist\",\"type\":\"history-filter\"},{\"nodeId\":\"b362caa7.30b868\",\"startTime\":1608151620517,\"endTime\":1608151620519,\"action\":0,\"status\":null,\"type\":\"audit\"},{\"nodeId\":\"f23d5572.add938\",\"startTime\":1608151620529,\"endTime\":1608151620535,\"action\":0,\"status\":null,\"name\":\"Conditional Item Approval\",\"type\":\"conditional\"},{\"nodeId\":\"5231d7f2.4d6be8\",\"startTime\":1608151620538,\"endTime\":1608151620541,\"action\":0,\"status\":null,\"name\":\"Finalise Rule\",\"type\":\"finalise\"}],\"end\":1608151620541},\"RULE_003\":{\"start\":1608151620548,\"nodes\":[{\"nodeId\":\"d23183a9.2ac66\",\"startTime\":1608151620548,\"endTime\":1608151620551,\"action\":1,\"status\":null,\"name\":\"MyName\",\"type\":\"items\"},{\"nodeId\":\"684920f.3e1fee\",\"startTime\":1608151620553,\"endTime\":1608151620554,\"action\":0,\"status\":null,\"name\":\"Finalise Rule\",\"type\":\"finalise\"}],\"end\":1608151620554}}" | rex field=_raw max_match=0 "(?<rule>[A-Z]*_\d+)"   Note the JSON is what is shown in the query result, not the escaped JSON above. So, what I want is to be able to mvexpand the 6 'rules' shown by the rex extraction, i.e.  CLEANSE_001 ENRICH_001 ROUTER_001 COMMON_001 RULE_001 RULE_003   so that there is a field called rule with that name in and then the JSON for that particular rule which I can then spath out to its component parts to do stuff with the data.   As this is a JSON representation of a map type of object, where the keys are the rule name, I don't know if it's possible to extract this with spath. The data is this sort of representation, where there can be any number of properties inside the rule and the nodes array objects can contain those shown plus other elements.        
{ "rule1": { start:1 end:55 nodes: [ { name:A start:1 end:17 }, { name:B start:17 end:42 } ] }, "rule2": { start:42 end:55 nodes: [ { name:C start:42 end:55 } ] } }

I don't want this to be split up into several rows at index time; this is simply a search querying a single event to show the processing of a particular entity through each rule. Is it possible to spath the data in a way that gets each rule into its own event, or will it have to be rex'ed out using some clever regex? I can spath _raw on this data, but then I get all the extracted fields with the rule 'name' as the top level of the field name. What I want is 6 events with rule_name=CLEANSE_001 etc. plus rule_data=the JSON object for that rule, which I can then spath.
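One sketch that seems to produce the 6 rule events (not presented as the answer; it assumes Splunk 8.1 or later for the json_extract eval function, and that the rule-name pattern [A-Z]+_\d+ only ever appears as a top-level key): pull the rule names out with rex, expand them to one row each, then extract each rule's object by name.

| rex max_match=0 "\"(?<rule_name>[A-Z]+_\d+)\":"
| mvexpand rule_name
| eval rule_data=json_extract(_raw, rule_name)
| table rule_name, rule_data

Each resulting row can then be fed through spath input=rule_data (or further json_extract calls) to pull out start, end and the nodes array for that rule.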
I'm trying to get better visibility into our PowerShell activity on one of my boxes (cola182), so I enabled process auditing (EventCode 4688), which is working perfectly fine. However, when I attempted to enable Module Logging (4103) and Script Block Logging (4104), it doesn't seem like I am receiving these logs. I went to Policy Editor > Computer Configuration > Windows Components > PowerShell logging and made sure that the following were enabled (literally all three are showing as enabled): Turn on Module Logging, Turn on PowerShell Script Block Logging, Turn on PowerShell Transcription. I ran a crappy little test.ps1 script on cola182 in hopes that this activity would be reflected in my Splunk logs:

$alert = { "I like chicken salad sandwiches" }
& $alert
& $alert

When I check Splunk, I am able to see this activity, but it doesn't come up under 4103:

LogName=Windows PowerShell
SourceName=PowerShell
EventCode=800
EventType=4
Type=Information
ComputerName=Cola182
TaskCategory=Pipeline Execution Details
OpCode=Info
RecordNumber=6578
Keywords=Classic
Message=Pipeline execution details for command line: . ParameterBinding(Out-Default): name="InputObject"; value="I like chicken salad sandwiches"

As simple as my initial script is, it's technically a script block. How come I'm not able to see this activity? What am I missing? Thanks!
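Not a definite answer, but a sketch of a check worth running: 4103/4104 are written to the Microsoft-Windows-PowerShell/Operational channel rather than the classic "Windows PowerShell" log the event above came from, so a search along these lines shows whether anything from that channel is arriving at all (the source naming is an assumption and depends on how your Windows inputs/TA are configured):

index=* source="WinEventLog:Microsoft-Windows-PowerShell/Operational" (EventCode=4103 OR EventCode=4104)
| stats count by host, EventCode

If nothing comes back, the gap may simply be that the Operational channel isn't listed in inputs.conf on that host, even though the group policy settings are enabled.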
This question: How to use IN function with VALUE-LIST as a search or lookup discusses using IN for a single key and a list of values. Can that approach be generalized for lists of KV lists? I want to abstract what could be done verbosely with ANDs and ORs:

(keyA=value1 AND keyB=value2) OR (keyA=value3 AND keyB=value4) OR (keyA=value5 AND keyB=value6) ...
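One generalisation that seems to fit (a sketch, assuming the key/value pairs can live in a lookup; kv_pairs.csv, its columns and index=foo are hypothetical names): a subsearch that returns several fields per row is expanded into exactly the ANDed-pairs-ORed-together expression shown above.

index=foo [
    | inputlookup kv_pairs.csv
    | fields keyA, keyB
    | format
]

The | format is optional here (a subsearch is rendered this way by default), but it makes the generated (keyA="..." AND keyB="...") OR ... expression easy to inspect by running the subsearch on its own.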
Hello, I'm having an issue. We've installed the SonicWall TA and Splunk Security Essentials and/or InfoSec. The SonicWall app works fine for basic information, but we're trying to delve deeper into the data and were hoping that SonicWall with CIM data would integrate easily. However, the data does not seem to come in correctly in InfoSec, and I cannot import it into Security Essentials at all. We need these features for compliance reasons and to be able to monitor the data more closely. Please let me know if there is any more information you need.
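Not necessarily the root cause, but a quick check that can narrow it down: InfoSec and Security Essentials read from the CIM data models rather than from raw events, so a tstats probe like the sketch below shows whether the SonicWall data is actually being mapped and accelerated (the Network_Traffic data model is an assumption; adjust to whichever model the dashboards you care about use, and drop summariesonly=true if acceleration isn't enabled):

| tstats summariesonly=true count from datamodel=Network_Traffic by sourcetype, index

If the SonicWall sourcetypes don't appear, the TA's CIM tags/eventtypes or the data model acceleration settings are the places to look next.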
Hello, Our infrastructure is currently hosted on Oracle Government Cloud and we are trying to determine a way to get the OCI Audit Logs sent to our Splunk instance. Given that OCI GovCloud has a limited number of services, we can not leverage the Service Connector Hub to send the OCI Audit Logs to an OCI Streaming Service or to Object Storage. Both Streaming Service and Object Storage would have been quick wins for us because Splunk has add-ons (https://splunkbase.splunk.com/app/4616/, https://splunkbase.splunk.com/app/5222/) with built-in Audit Log sourcetyping that can facilitate the ingestion of the OCI Audit Logs from these locations. The other option is to leverage our Splunk HF to leverage the REST API input to directly query the OCI Audit Log Service to get the logs. There is a TA (https://splunkbase.splunk.com/app/1546/) that can assist with this, so I believe this is probably the best approach. Does anyone have any other suggestions on how we can proceed with this? The goal is to get the OCI Audit logs sent to Splunk. The limitation is that we're using OCI GovCloud (US), which does not provide a key service that could have simplified our approach to routing the audit logs to Splunk.
I'm configuring one computer as a forwarder and another as the receiver. How do I find the IP address that the forwarding computer needs to send to? In other words, how do I find the IP address of the Search head on the receiving computer?
Hi! My AppDynamics .NET agent is not registering with the SaaS controller. The error message is this:

2020-12-11 17:29:30.6402 4204 w3wp 2 5 Info RootNodeHelper Default root node class set to AppDynamics.Agent.MSCORLIb.MethodExecutionEnvironment
2020-12-11 17:29:31.1246 4204 w3wp 2 7 Info AnnotationPropertyListenerManager Registered Node-Property [tx-disabled-header-details-enabled] to method [Void setEnabledHeaderWithDetails(Boolean)] on class com.appdynamics.ee.agent.appagent.services.transactionmonitor.common.exitcall.DisablingHeaderGenerator 1
2020-12-11 17:29:32.3435 10076 AppDynamics.Coordinator 1 6 Info RegistrationChannel using controller host [coopenae-prod.saas.appdynamics.com]; controller port [80]
2020-12-11 17:29:32.3435 10076 AppDynamics.Coordinator 1 6 Info RegistrationChannel setting unique host info Hostname [CURACAO]
2020-12-11 17:29:32.3435 10076 AppDynamics.Coordinator 1 6 Info RegistrationChannel setting agent version [20.11.0 compatible with 4.4.1.0]
2020-12-11 17:29:32.3435 10076 AppDynamics.Coordinator 1 6 Info RegistrationChannel setting agent properties []
2020-12-11 17:29:32.3435 10076 AppDynamics.Coordinator 1 6 Info RegistrationChannel setting agent install dir [C:\Program Files\AppDynamics\AppDynamics .NET Agent\]
2020-12-11 17:29:32.3435 10076 AppDynamics.Coordinator 1 6 Info RegistrationChannel setting agent type [MACHINE_AGENT]
2020-12-11 17:29:32.3435 10076 AppDynamics.Coordinator 1 6 Info RegistrationChannel setting agent application [CGP I]
2020-12-11 17:29:32.3435 10076 AppDynamics.Coordinator 1 6 Info RegistrationChannel setting agent tier name [Machine Agent]
2020-12-11 17:29:32.3435 10076 AppDynamics.Coordinator 1 6 Info RegistrationChannel setting agent node name [CURACAO]
2020-12-11 17:29:32.3435 10076 AppDynamics.Coordinator 1 6 Info RegistrationChannel Sending registration request
2020-12-11 17:29:32.3435 10076 AppDynamics.Coordinator 1 6 Warn RegistrationChannel Could not connect to controller / invalid response from controller, unable to get registration information
2020-12-11 17:29:40.7659 7564 w3wp 2 14 Info ConfigurationChannel Detected node meta-info: [Name: appdynamics.ip.addresses, Value: 172.28.1.89,10.255.220.52, Name: supportsDevMode, Value: true]
2020-12-11 17:29:40.7659 7564 w3wp 2 14 Info ConfigurationChannel Sending registration request with: Application name [%s], Tier name [%s], Node name [%s], Host name [%s] Node unique local ID [%s], Version [%s]
2020-12-11 17:29:40.9224 3364 w3wp 2 18 Info AnnotationPropertyListenerManager Registered NodeProperty [tx-disabled-header-details-enabled] to method [Void setEnabledHeaderWithDetails(Boolean)] on class com.appdynamics.ee.agent.appagent.services.transactionmonitor.common.exitcall.DisablingHeaderGenerator 1
2020-12-11 17:29:47.0789 10444 w3wp 2 14 Info ConfigurationChannel Detected node meta-info: [Name: appdynamics.ip.addresses, Value: 172.28.1.89,10.255.220.52, Name: supportsDevMode, Value: true]
2020-12-11 17:29:47.0789 10444 w3wp 2 14 Info ConfigurationChannel Sending registration request with: Application name [%s], Tier name [%s], Node name [%s], Host name [%s] Node unique local ID [%s], Version [%s]
2020-12-11 17:29:49.0946 6388 w3wp 2 12 Info ConfigurationChannel Detected node meta-info: [Name: appdynamics.ip.addresses, Value: 172.28.1.89,10.255.220.52, Name: supportsDevMode, Value: true]
2020-12-11 17:29:49.0946 6388 w3wp 2 12 Info ConfigurationChannel Sending registration request with: Application name [%s], Tier name [%s], Node name [%s], Host name [%s] Node unique local ID [%s], Version [%s]
2020-12-11 17:29:50.3916 3364 w3wp 2 9 Info ConfigurationChannel Detected node meta-info: [Name: appdynamics.ip.addresses, Value: 172.28.1.89,10.255.220.52, Name: supportsDevMode, Value: true]
2020-12-11 17:29:50.3916 3364 w3wp 2 9 Info ConfigurationChannel Sending registration request with: Application name [%s], Tier name [%s], Node name [%s], Host name [%s] Node unique local ID [%s], Version [%s]

Please help me! Thank you!
Hello guys, one clustered index is now oversized because the indexed data volume has dropped over the last several months and older data has frozen over time. The cold storage shows it now uses only about 1 GB of ~4 GB, so is it safe to reduce the cold path size and maxTotalDataSizeMB? In my opinion it should have no effect on the index; it should not freeze current data. Thanks for your help. Enterprise 7.1.4
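As a sanity check before touching indexes.conf (a sketch, not authoritative; the index name is a placeholder), dbinspect can confirm how much disk the existing buckets really use, broken down by bucket state and peer:

| dbinspect index=your_index
| stats sum(sizeOnDiskMB) as size_mb, count as buckets by state, splunk_server

As long as the new maxTotalDataSizeMB and cold volume limits stay above the sizes reported for the existing buckets, lowering them shouldn't by itself force anything to freeze, though it's worth confirming against the retention docs for your 7.1.4 version.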