All Topics

Citry contains 12 names. In the result I am able to see a city name only when it has a product; if the product count is zero, the Citry name is not shown.

base search
| stats count(product) AS Total BY City
| fillnull value=0 City

Current result:

Citry     Total
citry1    1
citry5    50
citry10   15

Expectation:

Citry     Total
citry1    1
citry2    0
citry3    0
citry4    0
citry5    50
citry6    0
citry7    0
citry8    0
citry9    0
citry10   15
citry11   0
citry12   0
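For context: fillnull only fills fields on rows that already exist; it cannot create rows for cities that produced no events. One common approach (a sketch, assuming a hypothetical lookup file all_cities.csv that lists all 12 names in a City column) is to append the full city list with a zero count and take the maximum per city:

base search
| stats count(product) AS Total BY City
| append [| inputlookup all_cities.csv | eval Total=0]
| stats max(Total) AS Total BY City
| sort City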
Hi, I have uploaded a CSV file and would like to know if it is possible to display only the content of the file.

Feature Business  Environment Bench Secs  Grade Risk Set offs Production 300 10 Ops Count UAT 500 11
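If the CSV was uploaded as a lookup file, its contents can be displayed directly (a minimal sketch; yourfile.csv is a placeholder for the actual lookup file name, which must be visible to your app):

| inputlookup yourfile.csv

If the file was indexed instead, restricting the search to its source and tabling the fields should achieve the same: source="yourfile.csv" | table *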
Hi All, I know this topic is quite extensively documented in several posts within the Splunk community, but I could not really figure out what is best to apply in the case below.

Some context about the architecture in use: we have basically 3 layers (so 3 indexes) which a FE call goes through:
apigee
mise (microservices)
eCommerce application

In other words, a call initiated by the FE is routed to apigee, from where it goes to some microservice, which in turn might call the eCommerce application. The calls are uniquely identified by a requestId, so given an apigee call I can determine exactly which calls to eCommerce are related, thanks to the requestId.

Splunk dashboard
I'm building a dashboard where:
panel 1: I list all apigee calls grouped by common URL to get some stats out of it (so far so good). Something like:
> oce/soe/orders/v1/*/billingProfile (normalized url), then display the columns: count, avg(duration), perc90(duration)
> oce/soe/orders/v1/*/delivery-method (normalized url), then display the columns: count, avg(duration), perc90(duration)
> ...
panel 2: given the focus on one apigee "normalized" call from panel 1, I list all related eCommerce calls. The goal is to grab some stats over those calls, grouping them by common URL, then averaging duration and taking perc90. To make this work I use a join, but it's very slow. Something like, given the focus on the billingProfile apigee call above:
> ecommerceapp/billinginfo/store/*/ (normalized url), then display the columns: count, avg(duration), perc90(duration)

I want to ask if you see any other way to reach the same goal without a join, or if you have any general hint to improve the performance. Thanks in advance, Vincenzo

I report below the index search I'm currently using, highlighting the part of interest:

index=ecommerce_prod (namespace::prod-onl) (earliest="$timeRange1.earliest$" latest="$timeRange1.latest$") "Status=20*"
| where isnotnull(SCS_Request_ID)
| rex field=_raw "(?<operation>(?<![\w\d])(GET|POST|PUT|DELETE)(?![\w\d]))"
| rex field=_raw "(?<=[/^(POST|GET|PUT|DELETE)$/] )(?<service>[\/a-zA-Z\.].+?(?=HTTP))"
| rex field=_raw "(?<=Duration=)(?<duration>[0-9]*)"
| eval temp=split(service,"/")
| eval field1=mvindex(temp,1)
| eval field2=mvindex(temp,2)
| eval field3=mvindex(temp,3)
| eval field4=mvindex(temp,4)
| eval field4=if(like(field4,"%=%") OR like(field4,"%?%"), "*", field4)
| eval field5=mvindex(temp,5)
| eval field5=if(like(field5,"%=%") OR like(field5,"%?%"), "*", field5)
| eval url_short=if(isnull(field5), field1."/".field2."/".field3."/".field4."/", field1."/".field2."/".field3."/".field4."/".field5)
| eval fullName=operation." ".url_short
| table SCS_Request_ID, operation, url_short, duration
| join SCS_Request_ID
    [ search index="apigee" (earliest="$timeRange1.earliest$" latest="$timeRange1.latest$") status="20*"
    | rename tgrq_h_scs-request-id as SCS_Request_ID
    | table SCS_Request_ID
    | where isnotnull(SCS_Request_ID) ]
| stats count, avg(duration) as avg_, perc90(duration) as perc90_ by operation, url_short
| eval avg_=round(avg_,2)
| eval perc90_=round(perc90_,2)
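A common join-free pattern is to search both indexes in one pass and correlate on the shared ID with stats. A sketch under the field names from the search above (time-range tokens omitted for brevity; the URL-normalization steps are elided where marked, and the per-request grouping is an approximation of the join semantics):

(index=ecommerce_prod namespace::prod-onl "Status=20*") OR (index="apigee" status="20*")
| eval SCS_Request_ID=coalesce(SCS_Request_ID, 'tgrq_h_scs-request-id')
| where isnotnull(SCS_Request_ID)
``` repeat the rex/eval URL-normalization steps from the original search here; they only produce fields on the ecommerce events ```
| stats dc(index) as idx_count values(operation) as operation values(url_short) as url_short values(duration) as duration by SCS_Request_ID
| where idx_count=2
| stats count avg(duration) as avg_ perc90(duration) as perc90_ by operation, url_short
| eval avg_=round(avg_,2), perc90_=round(perc90_,2)

The dc(index)=2 filter keeps only requestIds seen in both indexes, which is what the inner join was doing, without the subsearch result limits and serialization cost of join.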
Dear Splunk community, I am using rex to extract data from _raw and put it into new fields, like so:

[10/5/21 23:02:25:134 CEST] 00000063 SystemOut O 05 Oct 2021 23:02:25:133 [INFO] [CRONSERVER] [CID-MXSCRIPT-1673979] SCRIPTNAME - 00 - Function:httpDiscovery(POST, https, host, /call, BASE64ENC(USER:PASSWORD)) Profile = MYPROFILE - Scope = MYHOSTNAME - End - Result(strResponseStatus, stResponseReason, strResponseData)=([200], [OK], [{"message":"SUCCESS"}{"runId":"2021100523022485"} ])

| rex field=_raw "Scope = (?<fqdn>\S*)"
| rex field=_raw "Profile = (?<profile>\S*)"

This will create new fields and also show _raw. I don't want _raw to show, but if I use this:

| table _time

instead of this:

| table _time, _raw

the fields that I create will no longer show, so I have to include _raw as well. I can use mode=sed with rex to delete data from _raw and, for example, only keep profile and then rename _raw to profile, but I don't have any experience using sed and I would prefer an easier way.

My question: is it possible to hide _raw and still use rex on _raw to create new fields?

Thanks.
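The fields created by rex persist on the events whether or not _raw is displayed, so the usual way to hide _raw is simply to name the extracted fields in the table command (a minimal sketch; the index/sourcetype filter is a placeholder for your actual search):

index=your_index sourcetype=your_sourcetype
| rex field=_raw "Scope = (?<fqdn>\S*)"
| rex field=_raw "Profile = (?<profile>\S*)"
| table _time, fqdn, profile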
Hi All, I am trying to merge the rows of a column into one row for the table below:

App_Name                  Country      Last_Deployed            Temp_Version
com.citiao.cimainproject  China        2021-09-24 13:30:04.39   1.0.12.20210907193849359
com.citiao.cimainproject  HongKong     2021-09-24 11:48:15.176  1.0.12.20210907193849359
com.citiao.cimainproject  Indonesia    2021-09-10 13:17:38.254  1.0.12.20210907193849359
com.citiao.cimainproject  Malaysia     2021-09-10 14:54:54.098  1.0.12.20210907193849359
com.citiao.cimainproject  Philippines  2021-09-24 11:58:44.034  1.0.12.20210907193849359
com.citiao.cimainproject  Singapore    2021-09-10 12:53:25.539  1.0.12.20210907193849359
com.citiao.cimainproject  Thailand     2021-09-24 14:01:09.682  1.0.12.20210907193849359
com.citiao.cimainproject  Vietnam      2021-09-10 15:00:06.598  1.0.12.20210907193849359

I used the query below:

my query
| stats values(App_Temp_Name) as App_Name latest(LAST_DEPLOYED) as Last_Deployed latest(APP_TEMP_VER) as Temp_Version by Country
| table App_Name, Country, Last_Deployed, Temp_Version

But I need to merge the rows of the App_Name column into one row, keeping the others as they are, like:

App_Name                  Country      Last_Deployed            Temp_Version
com.citiao.cimainproject  China        2021-09-24 13:30:04.39   1.0.12.20210907193849359
                          HongKong     2021-09-24 11:48:15.176  1.0.12.20210907193849359
                          Indonesia    2021-09-10 13:17:38.254  1.0.12.20210907193849359
                          Malaysia     2021-09-10 14:54:54.098  1.0.12.20210907193849359
                          Philippines  2021-09-24 11:58:44.034  1.0.12.20210907193849359
                          Singapore    2021-09-10 12:53:25.539  1.0.12.20210907193849359
                          Thailand     2021-09-24 14:01:09.682  1.0.12.20210907193849359
                          Vietnam      2021-09-10 15:00:06.598  1.0.12.20210907193849359

Please help me modify the query to get the desired output. Thank you very much..!!
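One way to blank the repeated App_Name values after the stats (a sketch appended to the search above; the first row per app keeps the name, subsequent rows get an empty string):

my query
| stats values(App_Temp_Name) as App_Name latest(LAST_DEPLOYED) as Last_Deployed latest(APP_TEMP_VER) as Temp_Version by Country
| sort 0 App_Name, Country
| streamstats count as row_num by App_Name
| eval App_Name=if(row_num=1, App_Name, "")
| fields - row_num
| table App_Name, Country, Last_Deployed, Temp_Version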
Hi, I am trying to get the day-wise error count by data.message, but only if yesterday's error count is more than 50.

index="eshop" NOT(index=k8*dev OR index=k8*test) tag=error
| eval time=strftime(_time,"%Y-%m-%d")
| table time, data.message
<condition: if the previous day's data.message count is less than 50, it should be ignored from the stats>
| stats count by time, data.message
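One way to express that condition is to compute yesterday's count per message with eventstats and filter on it (a sketch, assuming the search window covers at least yesterday and today):

index="eshop" NOT(index=k8*dev OR index=k8*test) tag=error
| eval day=strftime(_time,"%Y-%m-%d")
| stats count by day, data.message
| eventstats sum(eval(if(day=strftime(relative_time(now(),"-1d@d"),"%Y-%m-%d"), count, 0))) as yesterday_count by data.message
| where yesterday_count > 50
| fields - yesterday_count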
I am looking for O365 use cases related to MS Teams, SharePoint, Exchange, and OneDrive. Currently the data is populated in Azure and needs to be ingested into Splunk for these use cases.

What kind of use cases can I create based on these data sources (MS Teams, SharePoint, Exchange, OneDrive)? I am also looking for malicious-activity and threat-level O365 use cases.

Please suggest.
Hi, I'm looking to use a heavy forwarder to append a string to specific log messages. I'm following the guide here: https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Anonymizedata (specifically the "Anonymize data with a regular expression transform" part), which only seems to mask data. I don't want to alter the log entry as such, but rather add something like "<Review Required>" to the end of any log that matches a specific regex. Can this be done using the heavy forwarder and transforms.conf?
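The same index-time transform mechanism can rewrite _raw to the matched event plus a suffix rather than a mask. A sketch, where your_sourcetype and your_pattern_here are placeholders to adapt:

# props.conf on the heavy forwarder
[your_sourcetype]
TRANSFORMS-append_review = append_review_required

# transforms.conf
[append_review_required]
# capture the whole event when it contains the pattern, then re-emit it with the tag appended
REGEX = ^(.*your_pattern_here.*)$
FORMAT = $1 <Review Required>
DEST_KEY = _raw

Events that do not match the REGEX pass through unchanged.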
Hello, I didn't find a solution here, so I'm sharing what I managed to get to work.

First of all, if you want to split your search across multiple panels, you can do that:

index="_internal" | timechart count by sourcetype

You can activate trellis by sourcetype. But if within each graph you also want a split by status (for example), please try this query:

index="_internal"
| bin _time
| stats count by _time, sourcetype, status
| eval {status}=count
| fields - status, count
| fillnull value=0
| stats sum(*) as * by _time, sourcetype
Hello, I want to set up AppDynamics for my department. I logged into AppDynamics, but the setup for integrating the sample application (from your GitHub repo) is too complex and nested: one application requires another application as a prerequisite, and the chain goes on. Is anyone available to demonstrate setting up a sample application, so that I can explore the full feature set before integrating AppDynamics in my organization?
Hello, as per the official ES documentation, the threat intel feeds below are enabled by default:

Mozilla Public Suffix List
MITRE ATT&CK Framework
ICANN Top-level Domains List

In addition, it also mentions that these are included. But when I check our ES app under Settings >> Threat Intel Management, I see only 3 feeds, as below. Where are those default feeds mentioned above?
We have a TA for Splunk and are using Splunk's internal library (splunk.entity) to fetch credentials from the passwords.conf file using the code below.

On some Splunk Enterprise instances the code works properly and returns the username and clear password. But on some Splunk Enterprise instances and on Splunk Cloud, we have observed that the value of ['eai:acl']['app'] is Null, due to which an exception is raised by the code.

We would like to know why some Splunk instances return a Null value for ['eai:acl']['app'].

Code (an excerpt from our credential-fetching function):

import splunk.entity as entity

myapp = 'APP-NAME'
realm = 'APP-NAME-Api'

try:
    entities = entity.getEntities(['admin', 'passwords'], namespace=myapp, owner='nobody', sessionKey=session_key)
except Exception as e:
    raise Exception("Could not get %s credentials from Splunk. Error: %s" % (myapp, str(e)))

for i, c in list(entities.items()):
    if c['eai:acl']['app'] == myapp and c['realm'] == realm:
        return c['username'], c['clear_password']

raise Exception("No credentials found.")

We also tried the below curl command, but the field ['eai:acl']['app'] is missing in the response:

curl -k -u <splunk_username>:<splunk_password> https://<host>:8089/services/storage/passwords
I'm getting the below error for TA-defender-atp-hunting on our HF:

Unable to initialize modular input "defender_hunting_query" defined in the app "TA-defender-atp-hunting": Introspecting scheme=defender_hunting_query: script running failed (exited with code 1).

splunkd logs:

ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup': The script at path=/opt/splunk/etc/apps/TA-defender-atp-hunting/bin/TA_defender_atp_hunting_rh_defender_hunting_query.py has thrown an exception=Traceback (most recent call last)
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "/opt/splunk/bin/runScript.py", line 82, in <module>
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':exec(open(REAL_SCRIPT_NAME).read())
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "<string>", line 4, in <module>
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "/opt/splunk/etc/apps/TA-defender-atp-hunting/bin/ta_defender_atp_hunting/splunktaucclib/rest_handler/endpoint/validator.py", line 388
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':except ValueError, exc:
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup': ^
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':SyntaxError: invalid syntax
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':Traceback (most recent call last):
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "/opt/splunk/bin/runScript.py", line 82, in <module>
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':exec(open(REAL_SCRIPT_NAME).read())
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "<string>", line 4, in <module>
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "/opt/splunk/etc/apps/TA-defender-atp-hunting/bin/ta_defender_atp_hunting/splunktaucclib/rest_handler/endpoint/validator.py", line 388
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':except ValueError, exc:
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup': ^
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':SyntaxError: invalid syntax
10-06-2021 03:20:48.828 +0000 ERROR AdminManagerExternal - External handler failed with code '1' and output: ''. See splunkd.log for stderr output.

I'm not able to access the Defender ATP Hunting app via the UI. Would anyone know how to resolve this issue? Thanks in advance!
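For context: the traceback points at Python 2-only syntax in validator.py line 388 (except ValueError, exc:), which the Python 3.7 interpreter Splunk is invoking rejects, so the add-on needs a Python 3-compatible release. The kind of change involved looks like this (shown for illustration only, not as a supported patch):

# Python 2 syntax, rejected by Python 3:
except ValueError, exc:
    ...
# Python 3 equivalent:
except ValueError as exc:
    ...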
Hello, can anyone please help me with line breaking? Multiple security events are merged into a single event; note the timestamp where the event should break.

The event below should be split into 2.

sourcetype = claroty:cef

2021-10-02 14:37:17 User.Info 10.10.32.132 Oct 02 2021 04:37:17 server026 CEF:0|Claroty|CTD|4.3.1|Alert/Host Scan|Host Scan|10|src=10.206.164.120 smac=2c:00:00:0f:9a:ad:73 dmac=28:0000:61:f7:7e:c3 externalId=24459 cat=Security/Host Scan start=Oct 02 2021 12:43:36 msg=ICMP Host scan: Asset 10.5.164.10 sent packets to different IP destinations deviceExternalId=INDO - CIBITUNG cs1Label=SourceAssetType cs1=Endpoint cs3Label=SourceZone cs3=Endpoint: Other cccs4Label=DestZone cs4=Endpoint: MQTT cs6Label=CTDlink cs6=

2021-10-02 14:37:17 User.Info 10.10.32.132 Oct 02 2021 04:37:17 server026 CEF:0|Claroty|CTD|4.3.1|Alert/Host Scan|Host Scan|10|src=10.206.163.251 smac=28:00:00:f7:7e:cb shost=DESKTOP-0C11AFV dmac=00:1b:1b:2c:12:1a externalId=24460 cat=Security/Host Scan start=Oct 02 2021 12:43:42 msg=ICMP Host scan: Asset 10.51.163.251 sent packets to different IP destinations deviceExternalId=INDO - CIBITUNG cs1Label=SourceAssetType cs1=Endpoint cs3Label=SourceZone cs3=Endpoint: Other cccs4Label=DestZone cs4=PLC: S7 cs6Label=CTDlink cs6=

Current props.conf on the search head:

[claroty:cef]
LINE_BREAKER = ([\r\n]*)[0-9]{4}-(0[1-9]|1[0-2])-(0[1-9]|[1-2][0-9]|3[0-1])(2[0-3]|[01][0-9]):[0-5][0-9]
BREAK_ONLY_BEFORE = ([\r\n]*)[0-9]{4}-(0[1-9]|1[0-2])-(0[1-9]|[1-2][0-9]|3[0-1])(2[0-3]|[01][0-9]):[0-5][0-9]
REPORT-cs1 = claroty_cef_1
REPORT-cs2 = claroty_cef_2
REPORT-cs3 = claroty_cef_3
REPORT-cs4 = claroty_cef_4
REPORT-cs5 = claroty_cef_5
REPORT-cs6 = claroty_cef_6
REPORT-cs7 = claroty_cef_7
REPORT-cs8 = claroty_cef_8
REPORT-cs9 = claroty_cef_9
EXTRACT-msg = msg=(?<msg>.*?).x00$
EXTRACT-rt = rt=(?<rt>\w\w\w\s\d\d\s\d\d\d\d\s\d\d:\d\d:\d\d)
EXTRACT-alert = Alert\|(?<Alert>.+?)\|
EXTRACT-event = Event\|(?<Event>.+?)\|
FIELDALIAS-AssignedTo = ResolvedAs AS AssignedTo

Additional question: once we get the correct settings into props.conf and run the search again, will the data be the same? Or will it only work for the new events that are ingested into Splunk?
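Two things worth noting. Line breaking is applied at index time, so these props.conf settings belong on the indexer or heavy forwarder (not the search head), and they only affect newly ingested data; events already indexed stay merged. Also, the regex above is missing the space and the seconds portion between the date and the time, so it can never match the event's leading timestamp. A sketch of a corrected stanza:

[claroty:cef]
SHOULD_LINEMERGE = false
# break before a leading "YYYY-MM-DD HH:MM:SS" timestamp; the first capture group (the newline) is discarded
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}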
I use the SPL below to find which hosts are logging in my environment and how far the timestamp of the last event sent by each host is from the current time.

| eval time_zone_diff = now() - lastTime
| eval recent_time = now() - recentTime
| sort time_diff

The results are hard to read: instead of reporting which hosts are affected, it just shows that so many are late and so many are ahead, and no host is listed. Would you help with SPL (not the Meta Woot! app, which I could not configure properly) to list which hosts are late and which are reporting events in the future, please? Thanks a million in advance.
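A per-host listing can be built on the metadata command (a sketch, assuming that is where lastTime comes from; the one-hour lateness threshold is an arbitrary placeholder to adjust):

| metadata type=hosts index=*
| eval lag_seconds = now() - lastTime
| eval last_event = strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| eval status = case(lag_seconds < 0, "ahead (future timestamps)", lag_seconds > 3600, "late", true(), "ok")
| table host, last_event, lag_seconds, status
| sort - lag_seconds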
Hi, my organisation is using Splunk SaaS and would like to integrate with ServiceNow using add-ons. In order to get a security sign-off, I would like to know the following:

1. Where are the basic ServiceNow credentials stored in Splunk Web UI? Can someone edit them, and who has permission to view that password?
2. What TLS version does Splunk SaaS use to communicate with the ServiceNow add-on?
Hi, I am trying to import a SAML cert into Phantom 5.0.1. phenv is deprecated in this version. Is there another script to do this, or will copying the cert into cacerts.pem work in this case? Thanks.
I've seen a few of my colleagues recently use a command called multireport, which seems to be largely undocumented, to solve some search conditions that have been very challenging to solve otherwise. Is there a Splunk dev, close to the developer of multireport, who would care to shed light on a full synopsis of how this command can be used and any potential arguments it accepts?
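For what it's worth, the usage pattern seen in the wild (for example inside some Enterprise Security searches) feeds the same incoming events through several bracketed reporting branches. This is an observed, unofficial pattern, not documented syntax, so treat it as an assumption:

index=_internal
| multireport
    [ stats count by sourcetype ]
    [ stats count by host ]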
I'm having trouble getting all the fields from Sysmon to parse automatically with the Microsoft Sysmon add-on; could someone tell me what I might be missing? The events are coming into my home Splunk instance (8.2.2) but are not being fully parsed correctly. I'm pretty sure I need to use a transform, but the one I've tried isn't working (I'm pretty sure I did it wrong, but *shrug* no idea if I did or not). I've installed Sysmon on my home computer and have the universal forwarder pointed at my home Splunk instance. I followed the guide I found here: https://hurricanelabs.com/splunk-tutorials/splunking-with-sysmon-series-part-1-the-setup/

As you can see in the screenshot, it only extracted some of the fields, and the IMPHASH value carried over into some other data.

inputs.conf for Sysmon:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = true
source = XMLWinEventLog:Microsoft-Windows-Sysmon/Operational

Output of my transforms file, with the path:

cat /opt/splunk/etc/apps/search/default/transforms.conf

[geo_us_states]
external_type = geo
filename = geo_us_states.kmz

[geo_countries]
external_type = geo
filename = geo_countries.kmz

[geo_attr_us_states]
filename = geo_attr_us_states.csv

[geo_attr_countries]
filename = geo_attr_countries.csv

[geo_hex]
external_type=geo_hex

[xmlwineventlog]
REGEX = "Data Name\=\'(?<_KEY_1>[A-Za-z]+)\'>(?<_VAL_1>[^<]+)<\/Data>"
DELIMS = "'>"

Here's a sample event straight from _raw (I looked this event over; nothing seemed overly sensitive):

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Sysmon' Guid='{5770385f-c22a-43e0-bf4c-06f5698ffbd9}'/><EventID>1</EventID><Version>5</Version><Level>4</Level><Task>1</Task><Opcode>0</Opcode><Keywords>0x8000000000000000</Keywords><TimeCreated SystemTime='2021-10-05T22:36:40.9004216Z'/><EventRecordID>7090</EventRecordID><Correlation/><Execution ProcessID='5908' ThreadID='7428'/><Channel>Microsoft-Windows-Sysmon/Operational</Channel><Computer>eelo</Computer><Security UserID='S-1-5-18'/></System><EventData><Data Name='RuleName'>-</Data><Data Name='UtcTime'>2021-10-05 22:36:40.899</Data><Data Name='ProcessGuid'>{ce4bb586-d378-615c-5b1e-000000007100}</Data><Data Name='ProcessId'>10476</Data><Data Name='Image'>C:\Program Files\SplunkUniversalForwarder\bin\splunk-powershell.exe</Data><Data Name='FileVersion'>-</Data><Data Name='Description'>-</Data><Data Name='Product'>-</Data><Data Name='Company'>-</Data><Data Name='OriginalFileName'>-</Data><Data Name='CommandLine'>"C:\Program Files\SplunkUniversalForwarder\bin\splunk-powershell.exe"</Data><Data Name='CurrentDirectory'>C:\WINDOWS\system32\</Data><Data Name='User'>NT AUTHORITY\SYSTEM</Data><Data Name='LogonGuid'>{ce4bb586-a1ce-615c-e703-000000000000}</Data><Data Name='LogonId'>0x3e7</Data><Data Name='TerminalSessionId'>0</Data><Data Name='IntegrityLevel'>System</Data><Data Name='Hashes'>MD5=1F8722C371906F7B659FA38B39B21661,SHA256=383581B2E6BE7003CCCC0DAFAE75CBA3B0885C441ACDBD9AE76EAAFD9602A022,IMPHASH=1BDECF92268D3D3EF70015DDFEB0FFB9</Data><Data Name='ParentProcessGuid'>{ce4bb586-d191-615c-2a1d-000000007100}</Data><Data Name='ParentProcessId'>18812</Data><Data Name='ParentImage'>C:\Program Files\SplunkUniversalForwarder\bin\splunkd.exe</Data><Data Name='ParentCommandLine'>"C:\Program Files\SplunkUniversalForwarder\bin\splunkd.exe" service</Data></EventData></Event>
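A few likely problems with the transform as written: REGEX and DELIMS are mutually exclusive settings (only one should be set), the REGEX value should not be wrapped in quotes, and the transform is never referenced from a props.conf REPORT- line. A sketch of a search-time extraction for the <Data Name='X'>value</Data> pairs (the sourcetype stanza name is an assumption; match it to what your events actually show, and note the Splunk Add-on for Microsoft Sysmon ships equivalent extractions out of the box):

# props.conf
[XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
REPORT-sysmon_data = sysmon_xml_kv

# transforms.conf
[sysmon_xml_kv]
# turn each Data element into a field named after its Name attribute
REGEX = <Data Name='([^']+)'>([^<]*)</Data>
FORMAT = $1::$2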
We are running a single Splunk Enterprise 8.1 instance on a Linux AWS EC2 instance. We are using SmartStore backed by S3. We have /opt/splunk (including the index volume/cache) on a separate partition from the OS. I have not seen/found any documentation addressing OS-level partitioning. Are there performance concerns with our current configuration? Could a spike in the size of non-index files on the Splunk partition cause performance issues or caching abnormalities with SmartStore? Does it make sense to separate the indexes (./lib/) onto their own partition, separate from both the OS and the Splunk config/app/log and other transient files?