All Topics

Hello everybody, I have a report that is generated every week, and I want to name it with the previous week's number. I use the "action.email.reportFileName" field to set the generated report's name. For example: today is 2022/02/11, which is the 6th week of the year. The report is scheduled today, but I want the title to reference week W-1, so week number 5. I found that the %V variable lets me dynamically put the current week number into the report name, but I'm looking for a trick to get the previous week's number instead. If someone has a solution, please share. Kind regards!
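One possible approach is to compute the previous week number inside the search itself rather than in the filename token. A minimal sketch in SPL, where the field names prev_week and report_name are purely illustrative (whether the result can then be fed into action.email.reportFileName depends on how the report and its tokens are set up):

| makeresults
| eval prev_week = strftime(relative_time(now(), "-7d@d"), "%V")
| eval report_name = "weekly_report_W" . prev_week

Running this on 2022/02/11 should yield prev_week=05, i.e. the week before the current one.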
Hello, I have a question about the Splunk Add-on for CrowdStrike FDR developed by Splunk. I would like to filter out events beyond what the add-on provides, that is, beyond filtering by event_simpleName. My exact use case is that I want to drop events containing IsOnRemovableDisk\"\:\"1 in the raw message. I tried to do it using props/transforms applied to the appropriate sourcetype, yet it does not seem to be applied at all, even with a config as simple as this:

props.conf:

[crowdstrike:events:sensor]
TRANSFORMS-usb = do_not_index

transforms.conf:

[do_not_index]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

I expected all the events to be dropped, but the transform does not get applied and all the events, except what is configured with the Event Filter in the add-on, are ingested into Splunk. Am I missing anything here? Is it even possible to filter events in more detail with the Splunk Add-on for CrowdStrike FDR based on the raw event data?
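A sketch of a more targeted version of the same idea, under two assumptions: that the props/transforms live on the Splunk instance that actually parses this sourcetype (typically the heavy forwarder or IDM running the FDR input, not a search head), and that the raw events really contain the escaped string IsOnRemovableDisk\":\"1. The stanza name drop_removable_disk is illustrative only:

props.conf:
[crowdstrike:events:sensor]
TRANSFORMS-removable = drop_removable_disk

transforms.conf:
[drop_removable_disk]
# the \\? makes the backslash before each quote optional, in case the raw JSON is not escaped
REGEX = IsOnRemovableDisk\\?":\\?"1
DEST_KEY = queue
FORMAT = nullQueue

If even REGEX = . drops nothing, the config is most likely not on the tier where the data is parsed, or the events arrive under a different sourcetype than crowdstrike:events:sensor at parse time.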
I have 1 Splunk server; it is the search head, indexer, and deployment server. I have Sysmon and the Splunk universal forwarder installed on my clients. I also have Splunk_TA_microsoft_sysmon installed under /opt/splunk/etc/apps, and the app is installed on the client as well. The Sysmon client logs are getting to the indexer, but they are going to the main index. I want to change this to the sysmon index (newly created). I have tried creating a local/inputs.conf file on the deployment server with index = sysmon:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = 1
index = sysmon

I expected it to change the inputs.conf on the client side, but that never happens. It seems as though the client is honoring another .conf file. I am not sure what I am missing. Any advice would be appreciated.
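A sketch of how this is usually pushed from a deployment server, assuming the TA is distributed as a deployment app (the serverclass name sysmon_clients is illustrative; narrow the whitelist to your real client names). An inputs.conf edited under /opt/splunk/etc/apps on the deployment server is local to that server and is never pushed; only apps under /opt/splunk/etc/deployment-apps are. It is also worth checking whether a copy of the TA installed manually on the client carries its own local/inputs.conf pointing at the default index:

# on the deployment server
# /opt/splunk/etc/deployment-apps/Splunk_TA_microsoft_sysmon/local/inputs.conf
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = 1
index = sysmon

# /opt/splunk/etc/system/local/serverclass.conf
[serverClass:sysmon_clients]
whitelist.0 = *

[serverClass:sysmon_clients:app:Splunk_TA_microsoft_sysmon]
restartSplunkd = true

After a "splunk reload deploy-server", the client should download the app into its etc/apps and restart with the new index setting.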
We have a couple of processes that run regularly, and I want to capture the errors and group them run-wise and date-wise. I tried transactions, but it is not splitting by run and put all the errors in the same group. Please help, thanks.

Sample events:

LogDate = "01/28/2022 03:00:47.417" , LogNo = "133" , LogLevel = "INFO" , LogType = "Bot End" , LogMessage = "Logger Session Stopped; Total run time: 0:17:22.002" , TimeTaken = "0:00:00.500" , ProcessName = "FARollforward" , TaskName = "Logger" , RPAEnvironment = "PROD" , LogId = "0133010____120220128030047417" , MachineName = "xxxxx" , User = "xxxxxx"

LogDate = "01/28/2022 03:00:38.679" , LogNo = "125" , LogLevel = "ERROR" , LogType = "Process Level" , LogMessage = "EXCEPTION: CustomSubTaskError;" , TimeTaken = "0:00:00.005" , ProcessName = "FARollforward" , TaskName = "NavigateOracle" , RPAEnvironment = "PROD" , LogId = "0125010____120220128030038679" , MachineName = "xxxxx" , User = "xxxxxx"

LogDate = "01/28/2022 01:01:47.004" , LogNo = "51" , LogLevel = "ERROR" , LogType = "Process Level" , LogMessage = "EXCEPTION: Unable to perform LEFTCLICK action." , TimeTaken = "0:00:00.017" , ProcessName = "FARollforward" , TaskName = "FARollforward-NavigateOracle" , RPAEnvironment = "PROD" , LogId = "0051010____120220128010147004" , MachineName = "xxxxxxx" , User = "xxxxxx"

LogDate = "01/27/2022 23:59:20.534" , LogNo = "1" , LogLevel = "INFO" , LogType = "Bot Start" , LogMessage = "Logger Session Started" , TimeTaken = "0:00:00.000" , ProcessName = "FARollforward" , TaskName = "Logger" , RPAEnvironment = "PROD" , LogId = "0001010____120220127235920534" , MachineName = "xxxxxx" , User = "xxxxx"

Desired output:

ProcessName     Errors                                                                                              Date
FARollForward   EXCEPTION: CustomSubTaskError; EXCEPTION: Unable to perform LEFTCLICK action                        01/28/2022
Cp              EXCEPTION: CustomSubTaskError; EXCEPTION: Unable to perform LEFTCLICK action; Exception: Failed     02/07/2022
FARollForward   EXCEPTION: CustomSubTaskError; EXCEPTION: Unable to perform LEFTCLICK action                        02/08/2022
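One possible way to group errors per run without transaction, as a sketch: it assumes each run begins with a LogType="Bot Start" event, and index=your_index sourcetype=your_sourcetype are placeholders for the real source of these events:

index=your_index sourcetype=your_sourcetype
| sort 0 ProcessName _time
| streamstats count(eval(LogType="Bot Start")) as run_id by ProcessName
| where LogLevel="ERROR"
| eval Date=strftime(_time, "%m/%d/%Y")
| stats values(LogMessage) as Errors by ProcessName, run_id, Date

The streamstats line increments run_id every time a "Bot Start" event is seen for a process, so all errors between one start and the next fall into the same run.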
Hello everyone, I'm going to try to be clear about what I'm trying to do. I wrote a search that lists computers with different criticality levels and owners, like this:

Owner    IP             Risk        CVE
User A   10.10.10.10    Critical    xxxxx
User B   10.10.10.11    Critical    xxxxx

I set an alert that triggers for each result (if "owner > 0") and sends an email to IT support with the PDF of the search results attached. An alert is now sent for each owner/IP, which is fine, but the attached PDF contains all of the results, and I would like the PDF to contain only the results that concern that owner, like this:

Owner    IP             Risk        CVE
User A   10.10.10.10    Critical    xxxxxx

Do I need to filter the search by owner and create an alert for each owner?

I don't know if this is very clear. Regards,
I have 2 Dashboards and the second dashboard is a drill-down for the 1st one. Everything is working as expected but in the second dashboard, the post-processing search is not working. I want to hide the rows if any of the panels in that row has 0 as output.I tried many ways but not working. I'm pasting the code for 2 dashboards here, please let me know what is missing. Thanks for the help Dashboard: 1 has this drill down: <drilldown> <link target="_blank">/app/search/business_detailed?form.time_second_dashboard.earliest=$field1.earliest$&amp;form.time_second_dashboard.latest=$field1.latest$&amp;form.environment=$env$&amp;form.task=$click.value$</link> </drilldown>   Dashboard 2 : Code   <form> <label>business_detailed</label> <fieldset submitButton="false"> <input type="time" token="time_second_dashboard"> <label>Select Time</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> <input type="dropdown" token="environment"> <label>Environment</label> <choice value="&quot;UAT&quot;">UAT</choice> <choice value="&quot;PROD&quot;">PROD</choice> <fieldForLabel>env</fieldForLabel> <fieldForValue>env</fieldForValue> </input> <input type="dropdown" token="task"> <label>BOT Process</label> </input> </fieldset> <row > <panel rejects= "$panel_show$" > <single> <search> <query>| makeresults |eval bot="cp Main"|table bot</query> <earliest>$time_second_dashboard.earliest$</earliest> <latest>$time_second_dashboard.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </single> </panel> <panel rejects= "$panel_show$" > <single> <title>Total Runs</title> <search> <query>index = "abc" env =$environment$ LogType = "*" TaskName = $task$-Main | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | rename TaskName as "Task Name", host as "VDI" | stats count(eval(LogMessage = "FATAL: process ended errorneously")) as Failed_Count, count(eval(LogMessage = "END: cp-Main execution")) as Success_Count1 |eval tot_count= Failed_Count + Success_Count1|table tot_count</query> <earliest>$time_second_dashboard.earliest$</earliest> <latest>$time_second_dashboard.latest$</latest> <progress> <condition match="$job.resultCount$ == 0"> <set token="panel_show">true</set> </condition> <condition> <unset token="panel_show"></unset> </condition> </progress> </search> <option name="colorMode">none</option> <option name="drilldown">none</option> <option name="link.visible">0</option> <option name="rangeColors">["0x53a051","0x0877a6","0xdc4e41"]</option> <option name="rangeValues">[0,100000000]</option> <option name="refresh.display">progressbar</option> <option name="refresh.link.visible">1</option> <option name="useColors">1</option> </single> </panel> <panel rejects="$panel_show$"> <single> <title>Process completed Successfully</title> <search> <query>index = "abc" env = $environment$ LogType = "*" TaskName = $task$-Main LogMessage= "END: cp-Main execution" | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | table Time, LogNo, host, LogType, LogMessage, TaskName | rename LogMessage as "Log Message", TaskName as "Task Name", host as "VDI" | sort - Time|stats 
count</query> <earliest>$time_second_dashboard.earliest$</earliest> <latest>$time_second_dashboard.latest$</latest> </search> <option name="drilldown">none</option> <option name="rangeColors">["0x53a051","0xdc4e41"]</option> <option name="rangeValues">[1000000000]</option> <option name="refresh.display">progressbar</option> <option name="link.visible">false</option> <option name="refresh.link.visible">true</option> <option name="useColors">1</option> </single> </panel> <panel rejects="$panel_show$"> <single> <title>Process completed with Error</title> <search> <query>index = "abc" env = $environment$ LogType = "*" TaskName = $task$-Main "FATAL: process ended errorneously"| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | table Time, LogNo, host, LogType, LogMessage, TaskName | rename LogMessage as "Log Message", TaskName as "Task Name", host as "VDI" | sort - Time|stats count</query> <earliest>$time_second_dashboard.earliest$</earliest> <latest>$time_second_dashboard.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="drilldown">none</option> <option name="rangeColors">["0xdc4e41","0xdc4e41"]</option> <option name="rangeValues">[10000000]</option> <option name="refresh.display">progressbar</option> <option name="link.visible">false</option> <option name="refresh.link.visible">true</option> <option name="useColors">1</option> </single> </panel> <panel rejects="$panel_show$"> <single> <title>Success Percent</title> <search> <query>index = "abc" env = $environment$ LogType = "*" TaskName =$task$-Main | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | rename TaskName as "Task Name", host as "VDI" | stats count(eval(LogMessage = "FATAL: process ended errorneously")) as Failed_Count, ,count(eval(LogMessage = "END: cp-Main execution")) as Success_Count1 | eval tot_count= Failed_Count + Success_Count1 | eval succ_per=round((Success_Count1/tot_count)*100,0)|table succ_per</query> <earliest>$time_second_dashboard.earliest$</earliest> <latest>$time_second_dashboard.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="drilldown">none</option> <option name="rangeColors">["0x581845","0xdc4e41"]</option> <option name="rangeValues">[100]</option> <option name="refresh.display">progressbar</option> <option name="link.visible">false</option> <option name="refresh.link.visible">true</option> <option name="unit">%</option> <option name="useColors">1</option> </single> </panel> </row> <row depends="$panel_show1$"> <panel> <single> <search> <query>| makeresults |eval bot="cp Adhoc"|table bot</query> <earliest>$time_second_dashboard.earliest$</earliest> <latest>$time_second_dashboard.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </single> </panel> <panel> <single> <title>Total Runs</title> <search> <query>index = "abc" env =$environment$ LogType = "*" TaskName = $task$-Main-Adhoc | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) 
| rename TaskName as "Task Name", host as "VDI" | stats count(eval(LogMessage = "FATAL: process ended errorneously")) as Failed_Count, count(eval(LogMessage = "END: process execution")) as Success_Count1 |eval tot_count= Failed_Count + Success_Count1|table tot_count</query> <earliest>$time_second_dashboard.earliest$</earliest> <latest>$time_second_dashboard.latest$</latest> <sampleRatio>1</sampleRatio> <progress> <condition match="'job.resultCount' > 0"> <set token="panel_show1">true</set> <unset token="panel_hide1"></unset> </condition> <condition> <set token="panel_hide1">true</set> <unset token="panel_show1"></unset> </condition> </progress> </search> <option name="colorMode">none</option> <option name="drilldown">none</option> <option name="link.visible">0</option> <option name="rangeColors">["0x53a051","0x0877a6","0xdc4e41"]</option> <option name="rangeValues">[0,100000000]</option> <option name="refresh.display">progressbar</option> <option name="refresh.link.visible">1</option> <option name="useColors">1</option> </single> </panel> <panel> <single> <title>Process completed Successfully</title> <search> <query>index = "abc" env = $environment$ LogType = "*" TaskName = $task$-Main-Adhoc LogMessage= "END: process execution" | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | table Time, LogNo, host, LogType, LogMessage, TaskName | rename LogMessage as "Log Message", TaskName as "Task Name", host as "VDI" | sort - Time|stats count</query> <earliest>$time_second_dashboard.earliest$</earliest> <latest>$time_second_dashboard.latest$</latest> </search> <option name="drilldown">none</option> <option name="rangeColors">["0x53a051","0xdc4e41"]</option> <option name="rangeValues">[1000000000]</option> <option name="refresh.display">progressbar</option> <option name="link.visible">false</option> <option name="refresh.link.visible">true</option> <option name="useColors">1</option> </single> </panel> <panel> <single> <title>Process completed with Error</title> <search> <query>index = "abc" env = $environment$ LogType = "*" TaskName = $task$-Main-Adhoc "FATAL: process ended errorneously"| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | table Time, LogNo, host, LogType, LogMessage, TaskName | rename LogMessage as "Log Message", TaskName as "Task Name", host as "VDI" | sort - Time|stats count</query> <earliest>$time_second_dashboard.earliest$</earliest> <latest>$time_second_dashboard.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="drilldown">none</option> <option name="rangeColors">["0xdc4e41","0xdc4e41"]</option> <option name="rangeValues">[10000000]</option> <option name="refresh.display">progressbar</option> <option name="link.visible">false</option> <option name="refresh.link.visible">true</option> <option name="useColors">1</option> </single> </panel> <panel> <single> <title>Success Percent</title> <search> <query>index = "abc" env = $environment$ LogType = "*" TaskName =$task$-Main-Adhoc | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S") |eval LogDescription = trim(replace(LogDescription, "'", "")) |eval LogMessage = trim(replace(LogMessage, "'", "")) |eval TaskName = 
trim(replace(TaskName, "'", "")) |eval host=substr(host,12,4) | rename TaskName as "Task Name", host as "VDI" | stats count(eval(LogMessage = "FATAL: process ended errorneously")) as Failed_Count, ,count(eval(LogMessage = "END: process execution")) as Success_Count1 | eval tot_count= Failed_Count + Success_Count1 | eval succ_per=round((Success_Count1/tot_count)*100,0)|table succ_per</query> <earliest>$time_second_dashboard.earliest$</earliest> <latest>$time_second_dashboard.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="drilldown">none</option> <option name="rangeColors">["0x581845","0xdc4e41"]</option> <option name="rangeValues">[100]</option> <option name="refresh.display">progressbar</option> <option name="link.visible">false</option> <option name="refresh.link.visible">true</option> <option name="unit">%</option> <option name="useColors">1</option> </single> </panel> </row> </form>

In the second dashboard I want to hide the entire row if the "Total Runs" panel in that row has 0 as output. I tried, but it is not working. Is anything getting messed up with the tokens passed from dashboard 1? I tried both depends and rejects, but neither works:

<progress> <condition match="$job.resultCount$ == 0"> <set token="panel_show">true</set> </condition> <condition> <unset token="panel_show"></unset> </condition> </progress>
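A minimal sketch of one way this is often wired up, assuming the intent is one visibility token per row, set from that row's Total Runs search. The token name show_row_main is illustrative, the condition quoting follows the 'job.resultCount' style already used above, and a <done> handler is used so the token is only evaluated once the search has finished:

<row depends="$show_row_main$">
  <panel>
    <single>
      <title>Total Runs</title>
      <search>
        <query>index="abc" env=$environment$ TaskName=$task$-Main | stats count</query>
        <earliest>$time_second_dashboard.earliest$</earliest>
        <latest>$time_second_dashboard.latest$</latest>
        <done>
          <condition match="'result.count' != 0">
            <set token="show_row_main">true</set>
          </condition>
          <condition>
            <unset token="show_row_main"></unset>
          </condition>
        </done>
      </search>
      <option name="drilldown">none</option>
    </single>
  </panel>
</row>

With depends on the <row> element, the whole row (not just individual panels) is hidden whenever show_row_main is unset, and each row can carry its own token (for example show_row_adhoc for the second row).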
Greetings! I need to know how I can find which use cases trigger the most alerts in Splunk. Is there a specific search query that can help? I need the use case name and the count of alerts.
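A sketch of one way to count this, assuming each use case is a scheduled saved search/alert and that your role can search the _internal index (the scheduler logs one event per run, with alert_actions populated when the alert actually fired):

index=_internal sourcetype=scheduler alert_actions=*
| stats count as triggered_alerts by savedsearch_name
| sort - triggered_alerts

savedsearch_name corresponds to the alert (use case) name as it is saved in Splunk.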
How do I disable CBC mode ciphers and use 3DES on the universal forwarder's management port 8089?
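A sketch of where the cipher list for port 8089 is usually controlled on a universal forwarder: the [sslConfig] stanza in server.conf. The cipher string below is purely illustrative; the names actually available depend on the OpenSSL build shipped with your forwarder version, so adjust the list to whatever your security policy requires:

# $SPLUNK_HOME/etc/system/local/server.conf
[sslConfig]
sslVersions = tls1.2
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256

A restart of the forwarder is needed for the change to take effect.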
Hi, I've configured the Alert Manager app and an alert in the same app. If I check the index (index=alerts), I can find only events with sourcetype=incident_change and none with sourcetype=alert_metadata. Has anyone had the same issue?
Hi, I want to create an alert that sends me an email notification if the count of events crosses a particular threshold between the start of the month and the 15th day of the month. My query is this:

index=akm_ing "xyz.ex.com" "aagkeyid":"49005" | stats count | where count > 600000

Can you please help me with how to achieve this?
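A sketch of one way to scope this, assuming the alert is scheduled (for example daily): anchor the search window at the start of the current month with earliest=@mon, and let the threshold condition only pass while today's day of month is 15 or earlier. The day check via strftime(now(), "%d") is the only addition to the original query:

index=akm_ing "xyz.ex.com" "aagkeyid":"49005" earliest=@mon latest=now
| stats count
| where count > 600000 AND tonumber(strftime(now(), "%d")) <= 15

The alert itself can then simply trigger on "number of results > 0".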
I'm using the Splunk Add-on for Microsoft Cloud Services, and I'm trying to ingest Event Hubs. We've set up the SP and enterprise app, created the Event Hubs, and configured the account and inputs in the add-on, but I see the following in the logs:

yyyy-dd-mm 11:36:20,813 level=INFO pid=18561 tid=MainThread logger=__main__ pos=mscs_azure_event_hub.py:_try_creating_blob_checkpoint_store:567 datainput="AzureEH" start_time= message="Blob checkpoint store not configured"
yyyy-dd-mm 11:36:14,786 level=INFO pid=18427 tid=MainThread logger=splunksdc.loop pos=loop.py:is_aborted:38 datainput="AzureEH" start_time= message="Loop has been aborted."
Hi Team, We are trying to build a dashboard in Splunk for the Azure PIM logs, to visualize who is elevating their admin roles in Azure, what activities they are performing, and how often they require the role. Unfortunately, we are not able to filter on that action in Splunk; in the Operations list we couldn't identify anything related to PIM. Please help with the search:

index=client* sourcetype="o365:management:activity" Workload=AzureActiveDirectory action

Regards, Sai
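A sketch of one way to narrow this down, assuming the PIM activity arrives in the same sourcetype and that the Operation and UserId fields carry the activity name and the actor (the wildcard patterns are only guesses at typical PIM-related operation names, so inspect values(Operation) and adjust):

index=client* sourcetype="o365:management:activity" Workload=AzureActiveDirectory (Operation="*PIM*" OR Operation="*privileged*" OR Operation="*role*")
| stats count by UserId, Operation
| sort - count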
Hi, I have the code below; however, as I grow the number of rows I feed into map, it gets very slow. Is there any way to run the map searches in parallel?

| map maxsearches=21 search="| savedsearch "$ALERT$" host_token=PDT SERVICE_EARLIEST_TIME=1643954400 time_token.earliest=1644213600 time_token.latest=1644268200 Threshold=$Threshold$ | appendcols [ | makeresults | eval Order="$Order$",Threshold=$Threshold$ | fillnull count ] | table ALERT count Order Threshold "

Thanks in advance, Rob
My query is:

index=windows Type=Disk host IN (abc) FileSystem="*" DriveType="*" Name="*" | dedup host, Name | table _time, host, Name | sort host, Name | join type=left host [| search index=perfmon source="Perfmon:CPU" object=Processor collection=CPU counter="% Processor Time" instance=_Total host IN (abc) | convert num(Value) as value num(pctCPU) as value | stats avg(value) as "CPUTrend" max(value) as cpu_utz by host | eval "Max Peak CPU" = round(cpu_utz, 2) | eval "CPUTrend"=round(CPUTrend, 2) | fields - cpu_utz | sort -"Peak CPU" | rename "Max Peak CPU" AS "maxCPUutil" | dedup "maxCPUutil" | table _time, host, "maxCPUutil"] | table host, "maxCPUutil", Name

It gives me this output:

host    maxCPUutil    Name
abc     5.59          c:
abc     5.59          E:
abc     5.59          F:

What I want is for my result to include multiple hosts, not a single host, and the output should look like this:

host    maxCPUutil    Name
abc     35.16         C:
                      E:
def     45.56         C:
                      I:
                      J

Please help me remove the repeated values; I need the host and maxCPUutil only once per host, with the drive letters listed under it.
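A sketch of one way to blank out the repeated host and maxCPUutil values so they only show on the first drive row of each host; streamstats numbers the rows within each host, and the field names follow the query above. Pipe the output of the existing search into something like:

| sort 0 host Name
| streamstats count as row_in_host by host
| eval maxCPUutil=if(row_in_host=1, maxCPUutil, ""), host=if(row_in_host=1, host, "")
| table host, maxCPUutil, Name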
Hi people! We've noticed that our VictorOps/Splunk On-Call mobile app asks us to log in again from time to time. Question: if the app asks me to log in again and an incident happens before I do, will my mobile app still receive a push notification for that incident, or will I have to log in first to receive further notifications? Thanks in advance!
I have already successfully installed the AppDynamics app server agent on seven application servers of my WebSphere Network Deployment environment. Unfortunately, I could not finish the setup on the last server. If I set the following parameters on my JVM:

-javaagent:/opt/IBM/AppAgents/AppServerAgent/javaagent.jar -Dappdynamics.agent.tierName=<placeholder> -Dappdynamics.agent.nodeName=<placeholder>

I see the following errors in the app server agent log:

[AD Agent init] 09 Feb 2022 16:11:21,251 INFO ConfigurationChannel - Container id retrieval enabled: true
[AD Agent init] 09 Feb 2022 16:11:21,252 WARN ConfigurationChannel - Unable to use /proc/self/cgroup for unique hostname, could not locate container ID
[AD Agent init] 09 Feb 2022 16:11:21,252 INFO ConfigurationChannel - Agent node meta-info thus far: ProcessID;111140;appdynamics.ip.addresses;21.0.11.44,21.1.11.44;appdynamicsHostName;xxxx
[AD Agent init] 09 Feb 2022 16:11:21,252 INFO ConfigurationChannel - Detected node meta info: [Name:ProcessID, Value:111140, Name:appdynamics.ip.addresses, Value:xxxx,xxxx, Name:appdynamicsHostName, Value:xxxx, Name:supportsDevMode, Value:true]
[AD Agent init] 09 Feb 2022 16:11:21,252 INFO ConfigurationChannel - Sending Registration request with: Application Name [xxxx], Tier Name [xxxx], Node Name [xxxx], Host Name [xxxx] Node Unique Local ID [xxxx], Version [Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0]
[AD Agent init] 09 Feb 2022 16:11:21,309 WARN AgentErrorProcessor - Agent error occurred, [name,transformId]=[com.singularity.XMLConfigManager - javax.xml.parsers.FactoryConfigurationError,2147483647]
[AD Agent init] 09 Feb 2022 16:11:21,309 WARN AgentErrorProcessor - 4 instance(s) remaining before error log is silenced
[AD Agent init] 09 Feb 2022 16:11:21,309 WARN XMLConfigManager - Error refreshing agent configuration javax.xml.parsers.FactoryConfigurationError: Provider javax.xml.parsers.DocumentBuilderFactory could not be instantiated: java.util.ServiceConfigurationError: javax.xml.parsers.DocumentBuilderFactory: Provider org.apache.xerces.jaxp.DocumentBuilderFactoryImpl not a subtype
at javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source) ~[?:?] 
at java.util.prefs.XmlSupport.loadPrefsDoc(XmlSupport.java:252) ~[?:1.8.0] at java.util.prefs.XmlSupport.importMap(XmlSupport.java:388) ~[?:1.8.0] at java.util.prefs.FileSystemPreferences$6.run(FileSystemPreferences.java:598) ~[?:1.8.0] at java.util.prefs.FileSystemPreferences$6.run(FileSystemPreferences.java:591) ~[?:1.8.0] at java.security.AccessController.doPrivileged(AccessController.java:734) ~[?:1.8.0] at java.util.prefs.FileSystemPreferences.loadCache(FileSystemPreferences.java:590) ~[?:1.8.0] at java.util.prefs.FileSystemPreferences.initCacheIfNecessary(FileSystemPreferences.java:573) ~[?:1.8.0] at java.util.prefs.FileSystemPreferences.getSpi(FileSystemPreferences.java:550) ~[?:1.8.0] at java.util.prefs.AbstractPreferences.get(AbstractPreferences.java:298) ~[?:1.8.0] at com.ibm.crypto.pkcs11impl.provider.IBMPKCS11Impl.<init>(IBMPKCS11Impl.java:464) ~[ibmpkcs11impl.jar:8.0 build_8.0-20200224-2] at java.lang.J9VMInternals.newInstanceImpl(Native Method) ~[?:2.9 (06-01-2020)] at java.lang.Class.newInstance(Class.java:1852) ~[?:2.9 (06-01-2020)] at sun.security.jca.ProviderConfig$2.run(ProviderConfig.java:233) ~[?:1.8.0] at sun.security.jca.ProviderConfig$2.run(ProviderConfig.java:218) ~[?:1.8.0] at java.security.AccessController.doPrivileged(AccessController.java:678) ~[?:1.8.0] at sun.security.jca.ProviderConfig.doLoadProvider(ProviderConfig.java:218) ~[?:1.8.0] at sun.security.jca.ProviderConfig.getProvider(ProviderConfig.java:199) ~[?:1.8.0] at sun.security.jca.ProviderList.getProvider(ProviderList.java:245) ~[?:1.8.0] at sun.security.jca.ProviderList.getIndex(ProviderList.java:275) ~[?:1.8.0] at sun.security.jca.ProviderList.getProviderConfig(ProviderList.java:259) ~[?:1.8.0] at sun.security.jca.ProviderList.getProvider(ProviderList.java:265) ~[?:1.8.0] at java.security.Security.getProvider(Security.java:479) ~[?:1.8.0] at com.ibm.jsse2.ac.<clinit>(ac.java:159) ~[?:8.0 build_20200327--103] at com.ibm.jsse2.ag.<init>(ag.java:18) ~[?:8.0 build_20200327--103] at com.ibm.jsse2.ag.<init>(ag.java:111) ~[?:8.0 build_20200327--103] at com.ibm.jsse2.av.a(av.java:87) ~[?:8.0 build_20200327--103] at com.ibm.jsse2.av.<init>(av.java:636) ~[?:8.0 build_20200327--103] at com.ibm.jsse2.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:2) ~[?:8.0 build_20200327--103] at com.singularity.ee.util.httpclient.EasySSLProtocolSocketFactory.createLayeredSocket(EasySSLProtocolSocketFactory.java:135) ~[appagent.jar:?] at com.singularity.ee.util.httpclient.EasySSLProtocolSocketFactory.connectSocket(EasySSLProtocolSocketFactory.java:193) ~[appagent.jar:?] 
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ~[httpclient-4.5.13.jar:4.5.13] at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) ~[httpclient-4.5.13.jar:4.5.13] at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) ~[httpclient-4.5.13.jar:4.5.13] at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) ~[httpclient-4.5.13.jar:4.5.13] at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) ~[httpclient-4.5.13.jar:4.5.13] at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) ~[httpclient-4.5.13.jar:4.5.13] at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) ~[httpclient-4.5.13.jar:4.5.13] at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[httpclient-4.5.13.jar:4.5.13] at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72) ~[httpclient-4.5.13.jar:4.5.13] at com.singularity.ee.util.httpclient.SimpleHttpClientWrapper.executeHttpOperation(SimpleHttpClientWrapper.java:302) ~[appagent.jar:?] at com.singularity.ee.util.httpclient.SimpleHttpClientWrapper.executeHttpOperation(SimpleHttpClientWrapper.java:217) ~[appagent.jar:?] at com.singularity.ee.rest.RESTRequest.sendRequestTracked(RESTRequest.java:384) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.rest.RESTRequest.sendRequest(RESTRequest.java:337) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.rest.controller.request.AControllerRequest.sendRequest(AControllerRequest.java:130) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.rest.controller.request.ABinaryControllerRequest.sendRequest(ABinaryControllerRequest.java:36) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.agent.appagent.kernel.config.xml.ConfigurationChannel.registerApplicationServer(ConfigurationChannel.java:1436) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.agent.appagent.kernel.config.xml.ConfigurationChannel.access$100(ConfigurationChannel.java:121) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.agent.appagent.kernel.config.xml.ConfigurationChannel$UnregisteredConfigurationState.nextTransition(ConfigurationChannel.java:784) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.agent.appagent.kernel.config.xml.ConfigurationChannel.refreshConfiguration(ConfigurationChannel.java:554) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.agent.appagent.kernel.config.xml.XMLConfigManager$AgentConfigurationRefreshTask.run(XMLConfigManager.java:656) ~[appagent.jar:Server Agent 
#21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.agent.appagent.kernel.config.xml.XMLConfigManager.initialize(XMLConfigManager.java:332) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.agent.appagent.kernel.AgentKernel.start(AgentKernel.java:166) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.agent.appagent.kernel.JavaAgent.initialize(JavaAgent.java:451) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at com.singularity.ee.agent.appagent.kernel.JavaAgent.initialize(JavaAgent.java:346) ~[appagent.jar:Server Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90) ~[?:1.8.0] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) ~[?:1.8.0] at java.lang.reflect.Method.invoke(Method.java:508) ~[?:1.8.0] at com.singularity.ee.agent.appagent.AgentEntryPoint$1.run(AgentEntryPoint.java:655) ~[?:Server IBM Agent #21.11.3.33314 v21.11.3 GA compatible with 4.4.1.0 r1f94344f9fd88fc14fe39a33494b03e4bb555a6d release/21.11.0] Due to this error the agent could not register sucessfully at the controller. Does anyone have an idea how to solve this problem. Is this problem maybe related to the security provider com.ibm.crypto.pkcs11impl.provider.IBMPKCS11Impl which is listed in the file java.security of the jdk on this server? On the other servers which I could sucessfully register at the controller this security provider is not listed in the java.security file. But maybe the cause for the problem is not related to this security provider. 
Are there any logs maintained by the Splunk universal forwarder in case of log processing failures? I would like to set up an alert and a dashboard for the log processing failures that occur while the universal forwarder tries to send logs to my indexer. I need this for compliance purposes. I checked https://docs.splunk.com/Documentation/Splunk/8.2.4/Troubleshooting/WhatSplunklogsaboutitself and thought splunkd_stderr.log on my servers would do the trick, but when I checked the corresponding logs I could see only start and stop messages. Can someone provide me with the keyword search and the file name for log processing failures on the Splunk universal forwarder?
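A sketch of where these usually surface, assuming the forwarder ships its own splunkd.log to the indexer (universal forwarders forward their internal logs to the _internal index by default) and that your role can search _internal; <your_forwarder> is a placeholder, and the component list is only a starting point for file-monitoring and output problems:

index=_internal sourcetype=splunkd host=<your_forwarder> log_level IN (WARN, ERROR) component IN (TailReader, TailingProcessor, TcpOutputProc)
| stats count by host, component
| sort - count

On the forwarder itself, the same messages live in $SPLUNK_HOME/var/log/splunk/splunkd.log.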
Hi,

We have installed the Splunk universal forwarder on a remote server, but logs are not getting forwarded to the indexer. I have tried to troubleshoot this issue but could not resolve it. Can you please help me get rid of this issue? Below are the steps I have tried so far.

The remote server is communicating with the indexer:

root@host1:/opt/splunkforwarder/etc/system/local# telnet host2 9997
Trying 10.20.30.40...
Connected to host2
Escape character is '^]'.
^]
telnet> quit
Connection closed.

Content of outputs.conf:

root@host1:/opt/splunkforwarder/etc/system/local# cat outputs.conf
[tcpout]
defaultGroup = splunk
[tcpout:splunk]
server = host2.ce.corp:9997

Content of inputs.conf:

root@host1:/opt/splunkforwarder/etc/system/local# cat inputs.conf
[default]
host = host1
[monitor:///var/log/messages]
disabled = false
sourcetype = web_haprx
index = webmethods_haprx

Ran ./splunk list forward-server:

root@host1:/opt/splunkforwarder/bin# ./splunk list forward-server
Your session is invalid. Please login.
Splunk username: admin
Password:
Active forwards:
host2:9997
Configured but inactive forwards:
None

Port 9997 is enabled on the receiver. I also checked splunkd.log for errors, but no luck. Can you please help me fix this issue?

Regards,
Rahul Gupta
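A couple of checks that often help here, as a sketch; it assumes the forwarder's internal logs reach the indexer and that the index webmethods_haprx has actually been created on the indexer (events sent to a non-existent index are not searchable and, depending on configuration, are dropped):

Is the forwarder connecting and sending data?
index=_internal host=host1 sourcetype=splunkd component=TcpOutputProc
index=_internal host=host1 source=*metrics.log group=tcpout_connections

Does the target index exist and contain anything?
| eventcount summarize=false index=webmethods_haprx

If _internal events from host1 are arriving but /var/log/messages events are not, the problem is usually on the input side (read permissions for the splunk user on /var/log/messages, or the missing index) rather than the network.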
Greetings! Dear all, I really need your help and guidance. I want to create a test environment that is similar to the live production environment. In my production I have 7 servers (one search head, a second Splunk instance for search head management, and the remaining 5 are indexers), and I want to have the same layout in the test environment. I have read the Splunk documentation, but it does not guide me well, so I would like your advice on how to build this. So far I have downloaded VirtualBox and CentOS 7. Kindly help me with what requirements I need to create a test environment that matches production: which Splunk Enterprise software should I use, the free 60-day trial, or do I have to copy the one I used in production and use it in the test environment? I'm lost; kindly help me set up this distributed deployment in the test environment with all the components I mentioned above. Thank you in advance.
Hello peeps, Does anyone know a better, faster command than transaction that can help to correlate data? I'm trying to correlate proxy server logs and AD logs. Please see my base search:

(index=proxy OR index=ad) src_ip!="-" | transaction src_ip | eval MB=round(((bytes_in+bytes_out)/1024/1024),2) | stats sum(MB) as "Bandwidth", values(WorkstationName) as Hostname by src_ip | sort 10 - Bandwidth | rename src_ip as "Source IP"

Please help me sort out this issue. Thank you.
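A sketch of one common alternative, assuming the only thing transaction contributes here is grouping by src_ip, which the stats at the end already does; with the eval applied per event before the stats, the transaction step can simply be dropped:

(index=proxy OR index=ad) src_ip!="-"
| eval MB=round(((bytes_in+bytes_out)/1024/1024),2)
| stats sum(MB) as "Bandwidth", values(WorkstationName) as Hostname by src_ip
| sort 10 - Bandwidth
| rename src_ip as "Source IP"

stats is distributable and far cheaper than transaction, which has to hold events in memory to build each group.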