
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi. My project is using Twistlock and Dependency-Check. Needless to say, we are always updating AppD agent dependencies to get past recent vulnerabilities. I would like to know which folders are not required for the AppD javaagent; it is currently almost 300 MB. We are bundling it in a Docker base image for all our microservices. Does the AppD agent really need all of these libraries at runtime?

.
├── conf
│   ├── jmx
│   │   └── servers
│   └── logging
├── external-services
│   ├── agent-self-monitoring
│   ├── analytics
│   ├── argentoDynamicService
│   │   └── argento-security-extension
│   │       ├── lib
│   │       └── tenants
│   │           └── argento
│   │               ├── config
│   │               └── lib
│   └── netviz
│       └── lib
├── lib
│   └── tp
├── multi-release
├── sdk
│   └── src
│       ├── docs
│       │   └── com
│       │       └── appdynamics
│       │           └── sample
│       │               ├── exitcall
│       │               │   ├── async
│       │               │   └── sync
│       │               └── multithread
│       └── java
│           └── com
│               └── appdynamics
│                   └── sample
│                       ├── exitcall
│                       │   ├── async
│                       │   └── sync
│                       └── multithread
└── utils
    ├── keystorereader
    ├── scs
    └── verifier
I want to calculate the volume of traffic (FortiGate firewall). I wrote this query, but I don't know whether it's correct:

index=<my index> sourcetype=<my_sourcetype>
| eval TotalTraffic_GB = (sum(bytes)/1000000000)
| stats sum(TotalTraffic_GB) as TotalGigaBytes, avg(TotalTraffic_GB) as AvgGigaBytes, max(TotalTraffic_GB) as MaxGigaBytes

Can anyone please help me?
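A minimal sketch of one way this is often written, assuming the FortiGate events carry a numeric bytes field; the index, sourcetype, and field names are placeholders, and sum() is moved into stats because it is an aggregation function, not an eval function:

index=<my index> sourcetype=<my_sourcetype>
| stats sum(bytes) as total_bytes avg(bytes) as avg_bytes max(bytes) as max_bytes
| eval TotalGigaBytes=round(total_bytes/1024/1024/1024, 2)
| eval AvgGigaBytes=round(avg_bytes/1024/1024/1024, 4)
| eval MaxGigaBytes=round(max_bytes/1024/1024/1024, 4)
| table TotalGigaBytes AvgGigaBytes MaxGigaBytes

Dividing by 1024^3 gives gibibytes; keep the original /1000000000 if decimal gigabytes are wanted.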
I have a script that creates a custom log file to gather all Splunk certs and uses openssl to print out the details of each cert (.pem files). This log file is then tracked as a data input to get the data into Splunk and is part of a custom app. The log file has entries that look like the following:

Date: 2020-01-01
Path: /opt/splunk/etc/auth/cert1.pem
Certificate:
    Data:
        Version: 1 (0x0)
Date: 2020-01-01 01:02:03.555
Path: /opt/splunk/etc/auth/cert2.pem
Certificate:
    Data:
        Version: 3 (0x2)
Date: 2020-01-01 01:02:03.555
Path: /opt/splunk/etc/auth/cert3.pem
Certificate:
    Data:
        Version: 3 (0x2)

I have set up a new index and am getting the data into Splunk so that it is searchable, but I have not quite been able to get my app's local/props.conf dialed in. Here is my current props.conf:

#BREAK_ONLY_BEFORE_DATE = true
DATETIME_CONFIG =
LINE_BREAKER = ([\n\r]+)^Date:\s\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3}
NO_BINARY_CHECK = true
#SHOULD_LINEMERGE = false
SHOULD_LINEMERGE = true
#BREAK_ONLY_BEFORE = ([\n\r]+)^Date:\s\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3}
category = Custom
description = Format custom logfile with decoded PEM certificate information for Splunk servers.
pulldown_type = 1
disabled = false
MAX_TIMESTAMP_LOOKAHEAD = 23
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
TIME_PREFIX = ^Date:\s
TZ = GMT

The issue I'm seeing right now is that Splunk breaks the first two lines of each entry (containing Date and Path) into one event, and then everything from Certificate until the next Date entry into another event. So for each entry in the log I get the following two events:

Event #1: Date & Path lines
Event #2: All remaining lines starting with Certificate until, but not including, the next occurrence of Date

Ideally, I would like events to start at Date and end at the line before the next occurrence of Date. The odd thing is that with the current props.conf, one event was parsed correctly. I am continuing to modify this and will report back if I resolve it, but any help in parallel is appreciated. This is my first time going through this exercise. Thanks!
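A minimal props.conf sketch of the usual single-pass approach, assuming every entry starts with a Date: line and that the stanza name below is a placeholder for the actual sourcetype. With SHOULD_LINEMERGE = false, LINE_BREAKER alone decides the event boundaries, and the first capture group is consumed so each new event begins at Date:

[your_cert_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)Date:\s\d{4}-\d{2}-\d{2}
TIME_PREFIX = ^Date:\s
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = GMT

Note the date-only first entry (Date: 2020-01-01) will not match a TIME_FORMAT that expects sub-seconds, so its timestamp may fall back to the previous event's time or to index time.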
We have data written in MST and the data is indexed without any issue. The Splunk servers are in CST, as is the forwarder, and we are getting the indexed time in CST, which is one hour off from MST, so the team wants it indexed in MST rather than CST. The challenge is that we can't change the TZ on either the Splunk servers or the forwarder. Currently no props are applied; I tried applying US/Mountain but it didn't work. Any suggestions for making the indexed time MST? Sample events:

[5/5/23 9:33:50:997 MST] 0000000 SystemOut
[5/5/23 9:33:50:994 MST] 0000000 SystemOut O ** ACCESS
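A minimal props.conf sketch of the usual fix, assuming the goal is for _time to honor the MST stamp already present in the events and that the stanza name is a placeholder. It has to live on the first full Splunk instance that parses the data (heavy forwarder or indexer, not a universal forwarder), requires a restart, and only affects newly indexed events:

[your_sourcetype]
TIME_PREFIX = ^\[
TIME_FORMAT = %m/%d/%y %H:%M:%S:%3N %Z
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = US/Mountain

If "indexed time" here literally means _indextime rather than _time, that value always reflects the indexer's clock and cannot be shifted with props.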
Hello, I have a use case where a few servers stopped ingesting for 3-4 hours while a user was doing performance testing on those servers, and then the servers started ingesting again automatically. I am not sure what caused the ingestion to stop. During the time ingestion was stopped, the logs were still available on the server. Please help me troubleshoot what might have caused this issue and how I can remediate it. Thanks in advance.
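A minimal sketch of two internal-log searches that are often a first troubleshooting step, assuming the affected hosts forward their own _internal logs; the host value is a placeholder. The first looks for errors and warnings from the forwarder during the gap, the second looks for blocked queues in metrics.log, a common symptom when the host is under heavy load:

index=_internal sourcetype=splunkd host=<affected_host> (log_level=ERROR OR log_level=WARN)

index=_internal source=*metrics.log host=<affected_host> group=queue blocked=true
| timechart count by name

If nothing was blocked, comparing _indextime to _time on the affected sourcetypes can show whether the data arrived late rather than not at all.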
Hi, what are the types of forwarder deployment?
I'm attempting to chart a maximum duration by server and event_type, and I'd like to display the duration in HH:MM:SS format rather than a number of seconds. However, fieldformat doesn't seem to be applying the change to the assigned duration field. Is there a way to do this? Here's the command:

index=s3batchtest eventcode Open
| extract pairdelim="," kvdelim="="
| eval bDate=strptime(beginDate,"%Y-%m-%d %H:%M:%S")
| eval lDate=strptime(lastDate,"%Y-%m-%d %H:%M:%S")
| eval eventAge=lDate - bDate
| chart max(eventAge) AS eventDuration by server eventCode limit=0
| fieldformat eventDuration=toString(eventDuration, "duration")
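A minimal sketch of a common workaround, assuming the same field names as above. With two split-by fields, chart names its value columns after the eventCode values rather than eventDuration, which may be why a fieldformat on eventDuration has nothing to act on; converting every numeric column after the chart with foreach and tostring (lowercase) is one way around that, at the cost of the columns becoming strings:

index=s3batchtest eventcode Open
| extract pairdelim="," kvdelim="="
| eval bDate=strptime(beginDate,"%Y-%m-%d %H:%M:%S"), lDate=strptime(lastDate,"%Y-%m-%d %H:%M:%S")
| eval eventAge=lDate - bDate
| chart max(eventAge) AS eventDuration over server by eventCode limit=0
| foreach * [eval <<FIELD>>=if(isnum('<<FIELD>>'), tostring(round('<<FIELD>>'), "duration"), '<<FIELD>>')]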
Hi, I am using the query below and I need the results on an hourly basis for the time range I selected:

"My Base search"
| fields TRAN_TIME_MS PAGE_ID PAGE_TITLE _time
| eventstats perc99(TRAN_TIME_MS) as Percentile by PAGE_ID
| where TRAN_TIME_MS <= Percentile
| stats count avg(TRAN_TIME_MS) as avg_time max(TRAN_TIME_MS) as max_time by PAGE_ID, PAGE_TITLE
| eval avg_time=round(avg_time/1000,2)
| eval max_time=round(max_time/1000,2)
| rename count as Total_Requests avg_time as Average(Seconds) max_time as Max_Time(Seconds) PAGE_ID as Page_ID PAGE_TITLE as Page_Description
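A minimal sketch of one way to get hourly buckets, assuming the same base search and field names: bin _time to one hour and carry it through both by clauses so the percentile and the stats are computed per hour:

"My Base search"
| fields TRAN_TIME_MS PAGE_ID PAGE_TITLE _time
| bin _time span=1h
| eventstats perc99(TRAN_TIME_MS) as Percentile by PAGE_ID, _time
| where TRAN_TIME_MS <= Percentile
| stats count avg(TRAN_TIME_MS) as avg_time max(TRAN_TIME_MS) as max_time by _time, PAGE_ID, PAGE_TITLE
| eval avg_time=round(avg_time/1000,2), max_time=round(max_time/1000,2)
| rename count as Total_Requests, avg_time as "Average(Seconds)", max_time as "Max_Time(Seconds)", PAGE_ID as Page_ID, PAGE_TITLE as Page_Description

If the percentile should stay global rather than per hour, drop _time from the eventstats by clause.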
Hi All, Good Day. I need help with Splunk data receiving. We have an Avamar backup node sending data to Splunk; the node is in the EST time zone and the Splunk server is configured with UTC. The data is received by Splunk and shows the correct time, but when the indexer parses it, it treats it as 4 hours old and ignores it, so backup failures are not captured and no ticket is generated for them. On the Splunk side, some of the received data shows a 4-hour difference between _time and _indextime and some of it is received correctly. When the difference is 4 hours, Splunk ignores the data and does not generate a ticket for the failures. Note: the search runs every 15 minutes; if we increase the search window to the last 4 hours, we get a lot of duplicates. We contacted Dell support, but they say the backup application just sends the data with the MIB file as soon as it receives it, and that it is up to Splunk to process the data correctly. Please help solve this issue; any recommendation is appreciated.
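A minimal props.conf sketch of the usual fix for a fixed offset like this, assuming the events themselves carry no explicit time zone and the stanza name is a placeholder; it tells the parsing tier to interpret the timestamps as US Eastern instead of UTC and only affects newly indexed data:

[avamar_sourcetype]
TZ = US/Eastern

An alternative on the alerting side is to constrain the scheduled search with _index_earliest/_index_latest so events are picked up by when they were indexed rather than by _time, which also avoids the duplicate problem when widening the window.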
I am trying to show all the values in a Splunk dashboard. I have this kind of data:

{ returnCode=2, itemCount=35, cdt=4, list[ { ip=23455er, extId=589358, ipl=23, ibd=500 }, { ip=34555de, extId=456633, ipl=100, ibd=all }, { ip=246789cd, extId=3859095, ipl=45, ibd=300 } ]

I want to display this in the Splunk dashboard like this:

returnCode  itemCount  ip                          extId                    ibd
2           35         23455er, 34555de, 246789cd  589358, 456633, 3859095  500, all, 300
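A minimal sketch of one way this is often done, assuming the index and sourcetype placeholders below. If the events were strict JSON, spath would extract the array directly; with the key=value style shown above, rex with max_match pulls every list entry into a multivalue field, which mvjoin then flattens for display:

index=<your_index> sourcetype=<your_sourcetype>
| rex field=_raw "returnCode=\s*(?<returnCode>\w+)"
| rex field=_raw "itemCount=\s*(?<itemCount>\w+)"
| rex field=_raw max_match=0 "ip=(?<ip>\w+)"
| rex field=_raw max_match=0 "extId=\s*(?<extId>\w+)"
| rex field=_raw max_match=0 "ibd=(?<ibd>\w+)"
| eval ip=mvjoin(ip, ", "), extId=mvjoin(extId, ", "), ibd=mvjoin(ibd, ", ")
| table returnCode itemCount ip extId ibd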
Hi, I believe I should be able to use the Splunk HTTP Event Collector to send events to Splunk. I have created an Event Collector token and tried a bunch of different URLs, but none seem to work. I have tried curl and Postman. For a free account, if my URL is https://prd-p-m95xx.splunkcloud.com, what would my HEC URL be? Here's my curl command:

curl "MysteryURL" -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" -d "{\"event\": \"Hello, Anyone!\", \"sourcetype\": \"manual\"}" -v

Thanks
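A minimal sketch of the shape the request usually takes, assuming the standard HEC endpoint path /services/collector/event. The hostname prefix is the part that varies by Splunk Cloud stack type (commonly http-inputs-<stack> on Splunk Cloud Platform stacks, or inputs.<stack> with port 8088 on trial stacks), so treat the host below as an assumption to verify against the documentation for your stack:

curl -k "https://http-inputs-prd-p-m95xx.splunkcloud.com/services/collector/event" \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": "Hello, Anyone!", "sourcetype": "manual"}'

A successful call returns {"text":"Success","code":0}; a 401/403 usually points at the token, while a connection failure usually points at the hostname or port.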
I have the following events:

<190>May 4 20:20:36 data.test.com 1,2023/05/04 20:20:35,013001101002958,test,end,2305,2023/05/04

I want to remove everything before the second comma (including the comma). Since I don't want it to be indexed, I'm using props and transforms on my HF to do that. My regex seems to work, but when I implement it, it does not filter anything.

props.conf
[source::/var/log/splunk/IP/syslog.log]
TRANSFORMS-null = remove_before_comma

transforms.conf
[remove_before_comma]
REGEX = ^([^,]*,[^,]*),
DEST_KEY = queue
FORMAT = nullQueue

Here is the regex: https://regex101.com/r/Lxqgue/1

Any idea why this is not working properly? Thanks
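A minimal sketch of one common alternative, assuming the goal is to strip the prefix out of _raw at parse time rather than drop events. DEST_KEY = queue with FORMAT = nullQueue routes whole matching events to the null queue (discarding them entirely); it does not trim part of an event, so a SEDCMD in props.conf on the HF is the usual tool for this kind of rewrite:

props.conf
[source::/var/log/splunk/IP/syslog.log]
SEDCMD-strip_prefix = s/^[^,]*,[^,]*,//

This runs before indexing on the parsing instance, so only newly arriving events are affected.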
Dear Community members, Splunk DB Connect on my Splunk indexer v9.0.1 is unable to start the Task Server. I looked into the suggestions from many community members related to this error and tried most of them, to no avail. I backed up the configurations and even removed the app and installed it again. I reinstalled the JDK and updated the environment variables on the server with the correct JAVA_HOME and Path. However, when trying to do the setup for Splunk DB Connect, I see the error "Cannot communicate with task server, please check your settings". When I go to Configuration > Settings and try to save the settings with the JRE installation, Splunk tries to start the Task Server, but after a minute it times out with the error "Failed to start Task Server". There is not much information in the DB Connect log, but in the splunkd logs I see the following error and am unable to understand the reason. We are not using MongoDB as a database; only a SQL Server DB is in use, to which we wish to connect. Below is the snippet of the error from splunkd, as it seems to be the root cause of the Task Server not starting up:

\windows_x86_64\bin\dbxquery.exe" action=start_dbxquery_server, configFile=D:\Apps\Splunk/etc/apps/splunk_app_db_connect/config/dbxquery_server.yml
\windows_x86_64\bin\server.exe" action=start_task_server, configFile=D:\Apps\Splunk/etc/apps/splunk_app_db_connect/config/dbx_task_server.yml
\windows_x86_64\bin\dbxquery.exe" 17:35:11.475 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - initializing secret kv store collection
\windows_x86_64\bin\dbxquery.exe" 17:35:11.553 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - secret KV Store not found, creating
\windows_x86_64\bin\dbxquery.exe" action=dbxquery_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator. stack=com.splunk.HttpException.create(HttpException.java:84)\\com.splunk.DBXService.sendImpl(DBXService.java:140)\\com.splunk.DBXService.send(DBXService.java:52)\\com.splunk.HttpService.post(HttpService.java:387)\\com.splunk.EntityCollection.create(EntityCollection.java:95)\\com.splunk.EntityCollection.create(EntityCollection.java:83)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initialize(SecurityFileGenerationUtil.java:245)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initEncryption(SecurityFileGenerationUtil.java:49)\\com.splunk.dbx.command.DbxQueryServerStart.startDbxQueryServer(DbxQueryServerStart.java:82)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:50)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\
\windows_x86_64\bin\dbxquery.exe" com.splunk.modularinput.MalformedDataException: Events must have at least the data field set to be written to XML.
\windows_x86_64\bin\dbxquery.exe" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:51)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\
\windows_x86_64\bin\server.exe" 17:35:11.694 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - initializing secret kv store collection
\windows_x86_64\bin\server.exe" 17:35:11.741 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - secret KV Store not found, creating
\windows_x86_64\bin\server.exe" action=task_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator. stack=com.splunk.HttpException.create(HttpException.java:84)\\com.splunk.DBXService.sendImpl(DBXService.java:140)\\com.splunk.DBXService.send(DBXService.java:52)\\com.splunk.HttpService.post(HttpService.java:387)\\com.splunk.EntityCollection.create(EntityCollection.java:95)\\com.splunk.EntityCollection.create(EntityCollection.java:83)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initialize(SecurityFileGenerationUtil.java:245)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initEncryption(SecurityFileGenerationUtil.java:49)\\com.splunk.dbx.server.bootstrap.TaskServerStart.startTaskServer(TaskServerStart.java:108)\\com.splunk.dbx.server.bootstrap.TaskServerStart.streamEvents(TaskServerStart.java:69)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:145)\\
\windows_x86_64\bin\server.exe" com.splunk.modularinput.MalformedDataException: Events must have at least the data field set to be written to XML.
\windows_x86_64\bin\server.exe" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.dbx.server.bootstrap.TaskServerStart.streamEvents(TaskServerStart.java:74)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:145)\\
\windows_x86_64\bin\server.exe" action=start_task_server, configFile=D:\Apps\Splunk/etc/apps/splunk_app_db_connect/config/dbx_task_server.yml

Thanks for your help!
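A minimal sketch of a first check that follows from the HTTP 503 in the log, assuming CLI access on the indexer: DB Connect uses the KV store for its own secret storage even when no MongoDB data source is involved, so confirming that the KV store itself is healthy is usually the first step.

D:\Apps\Splunk\bin\splunk show kvstore-status

| rest /services/server/info splunk_server=local | fields kvStoreStatus

If the status is not "ready", mongod errors in index=_internal sourcetype=mongod (or KVStorageProvider entries in splunkd.log) usually narrow down why initialization is failing.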
I have a Splunk search that outputs results as follows:

Details                                   link
Product Details :                         abcd_website
Product 1:- ABC123
Product 2:- DEF456

Now how do I combine both fields into one, as follows?

Details                                   link
Product Details :                         abcd_website
Product 1:- ABC123 link:- abcd_website
Product 2:- DEF456

The eval below gives me this result instead:

| eval Details = Details + link

Details                                   link
Product Details :                         abcd_website
Product 1:- ABC123
Product 2:- DEF456 link:- abcd_website

I do not want to add the link at the end; I want it somewhere in the middle, after a specific field. Also, I do not want to touch or edit the Details field itself; although that would be the easy way, it comes from a macro that is used by many searches. I am looking for an alternate way, so that I can update Details for this specific search only.
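A minimal sketch of one way to splice the value in without changing the macro, assuming Details is a single multi-line string and the insertion point is right after the Product 1 value (the pattern below is a placeholder to adjust to the real layout); eval replace supports backreferences, so the matched text can be kept and the link appended to it:

... | eval Details=replace(Details, "(Product 1:-\s*\S+)", "\1 link:- " . link)

If Details is actually a multivalue field (one value per line), mvmap with an if() on the matching value is the analogous approach.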
Is there a way to pass a parameter to a report when calling it like this?

curl -u user:password -k https://<api_server>:8089/servicesNS/nobody/<app_name>/search/jobs -d "search=savedsearch <savedsearch_name>" -d exec_mode=oneshot -d count=10000
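A minimal sketch of one approach, assuming the saved search is written with a $my_param$ token in its SPL (my_param is a placeholder name): the savedsearch command accepts key=value arguments and substitutes them into those tokens, so the argument can be appended to the dispatched search string:

curl -u user:password -k https://<api_server>:8089/servicesNS/nobody/<app_name>/search/jobs \
  -d 'search=| savedsearch <savedsearch_name> my_param="some value"' \
  -d exec_mode=oneshot -d count=10000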
Hi all, I am confident with strptime/strftime, but I'm really struggling with the correct strptime argument for the following date/time format: 2023-01-25T21:32:04:501+0000. The T between the date and the time is causing me issues. Thank you in advance!
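A minimal sketch, assuming the value lives in a field called timestamp (a placeholder); the T can be written literally in the format string, and note this sample uses a colon rather than a dot before the milliseconds:

| eval t=strptime(timestamp, "%Y-%m-%dT%H:%M:%S:%3N%z")
| eval check=strftime(t, "%Y-%m-%d %H:%M:%S.%3N %z")

If %3N does not parse on your version, %N or %Q are the usual alternatives to try for the subsecond part.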
Can anyone help me with a regex to fetch the value after the last "/"? Thanks
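A minimal sketch using rex, assuming the value is in a field called url (both names are placeholders); [^/]+$ grabs everything after the last slash because it cannot cross another slash and is anchored to the end of the string:

| rex field=url "(?<last_segment>[^/]+)$"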
Hello all, AppDynamics provides multiple dashboards for SAP, and the ICM Monitor dashboard is one of them. Under ICM Monitor, a STRUST certificate expiration view is present, which displays the list of certificate identities from STRUST where a certificate will soon expire or has already expired; this dashboard shows statistics from each application server of the SAP system. So my question is: does it also cover SAP PI and PO? If not, is there any way to achieve the same? Thanks & Regards
Hi team, how can I ingest data from a global website into Splunk if an API is available?