All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


My task is to format the field "app" with related field names. How can I use the format command to produce something like (app=*app1* OR app=*app2* OR app=*app3* OR ...)? Please help me, thanks.
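One way to build that clause is to let a subsearch emit the app values and have format assemble the OR list. A hedged sketch (the index and base search are placeholders):

```spl
index=main [ search index=main | stats count by app | fields app | eval app="*".app."*" | format ]
```

The inner eval wraps each value in wildcards before format joins them with OR; format's default output looks like ( ( app="*app1*" ) OR ( app="*app2*" ) ), and wildcards inside the quoted values still glob in the outer search.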
I need a query for a basic malware outbreak. The query should pull the server IP and server name from these raw logs.
Hello, I've got a report that generates roughly 300k entries whenever it runs, and I want to append the results into a lookup table. In a different post, it looked like lookup tables could hold large numbers of rows, but I'm observing a cap at 50,000 entries. Is it possible to remove this cap, and if so, how?
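A 50,000-row ceiling often points at the default maxresultrows in limits.conf rather than a hard lookup-file limit. A hedged config sketch (this is an assumption about the cause; raising the value affects search-head memory, so check the spec file for your version first):

```ini
# limits.conf on the search head (sketch, not a recommended value)
[searchresults]
maxresultrows = 500000
```

With the cap raised, an append pattern such as `| outputlookup append=true my_lookup.csv` should retain more than 50,000 rows.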
I have multiple panels on my dashboard. How do I select multiple panels at the same time so I can move them all at once?
My current cluster-agent.yaml has appName repeated 3 times. I hope there is a way to set a tierName such that, in the report, I can separate the results by cluster, or even better, by namespace. I can put the same k8s deployment in either different k8s namespaces (cheaper) or different clusters, and I need AppD to report on them separately.

========= cluster-agent.yaml =========
spec:
  appName: "EPOCH-svc"
  image: "docker.io/appdynamics/cluster-agent:latest"
  serviceAccountName: appdynamics-cluster-agent
  nsToMonitor: [nodejs,dev1,dev2,perf1]
  stdoutLogging: "true"
  instrumentationMethod: Env
  nsToInstrumentRegex: nodejs
  defaultAppName: EPOCH-svc
  logLevel: "DEBUG"
  instrumentationRules:
    - namespaceRegex: nodejs
      instrumentContainer: select
      containerMatchString: epoch-awsmw-offerms-dcp
      language: nodejs
      appName: EPOCH-svc
      imageInfo:
        image: "docker.io/appdynamics/nodejs-agent:22.7.0-16-stretch-slim"
I am trying to search with a specific date and time. Is it possible to search and compare? For example, I want to get stats from 2022-12-20 14:00:00 to 2022-12-20 15:00:00 and compare them with other dates like 12/16, 12/10, and 12/5 over the same time range. Is there a way to get the stats compared side by side with the other dates, or to have all the mentioned dates and times (2p-3p) in one search query?
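One hedged sketch for a side-by-side comparison: run each window with explicit earliest/latest (Splunk time terms use the %m/%d/%Y:%H:%M:%S form) and append the results. The index, stats, and field names are placeholders:

```spl
index=main earliest="12/20/2022:14:00:00" latest="12/20/2022:15:00:00"
| stats count AS events | eval day="12/20"
| append [ search index=main earliest="12/16/2022:14:00:00" latest="12/16/2022:15:00:00"
           | stats count AS events | eval day="12/16" ]
```

Each appended subsearch contributes one row, so the final table shows the windows side by side; add one append per extra date.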
Hi All, We are trying to set up CPU alerts for a few servers, and we want to throttle the alerts to reduce the noise.

Option 1:
Trigger = Once
Throttle = Checked
Suppress trigger for = 4 hours
If I select this option and an alert is triggered for an issue on one host, it won't generate another alert for 4 hours, but I think we will miss an issue with another host during those 4 hours. Is that right?

Option 2:
Trigger = For Each Result
Throttle = Checked
Suppress results containing field value = host
Suppress trigger for = 4 hours
If we choose this option, the issue is that 10 hosts will generate 10 separate alerts, one for each host.

Can someone advise which is the better way to set up this alert? Thanks
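Option 2 matches the usual per-host throttling pattern: each host can alert at most once per suppression window, while other hosts still fire. A hedged savedsearches.conf sketch of the same settings the UI writes:

```ini
# savedsearches.conf (sketch of the per-result + per-host throttle)
alert.track = 1
alert.digest_mode = 0          ; 0 = trigger an action for each result
alert.suppress = 1
alert.suppress.fields = host   ; throttle separately per host value
alert.suppress.period = 4h
```

The "10 hosts, 10 alerts" behavior is by design here; if that volume is too noisy, a digest-mode alert (Trigger = Once) that lists all affected hosts in one notification is the common alternative.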
Context: I have an external client that uses Arctic Wolf for sysmon logs on their endpoints, and I need to ingest those logs into our Splunk environment. I am not able to install UFs on their endpoints. I know that normally we'd install UFs on the Windows endpoints and have the sysmon logs sent to the indexers, using Splunk's Add-on for Sysmon for CIM compliance, extractions, etc. Since I can't take the normal approach, what's the best practice for receiving and ingesting a client's external sysmon logs without installing UFs on their endpoints?
Any updates on when InfraViz will be supported by AppDynamics for OpenShift 4.x clusters, now that the default container runtime has moved to CRI-O? Upstream Kubernetes has already removed dockershim. Any plans to support CRI-O?
I have a field called properties.requestbody. I would like to have this field broken out into its field/value pairs. I've tried spath with no luck. I use rename to extract the fields and values in other parts of the logged events, but I'm not having any luck with this field. I think it has to do with the quotes, but I'm not certain. Thanks as always for the help and guidance.
"properties": {"requestbody": "{\"properties\":{\"description\":\"Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to diskAccesses, data leakage risks are reduced. Learn more about private links at: https://aka.ms/disksprivatelinksdoc. \",\"displayName\":\"COMP-015N-Disk access resources should use private link-AuditIfNotExists-BUL\",\"metadata\":\"******\",\"mode\":\"Indexed\",\"parameters\":\"******\",\"policyRule\":\"******\",\"policyType\":\"Custom\"}}"
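Because properties.requestbody holds an embedded JSON document as a string, one hedged approach is to point spath at that field so it parses the inner JSON (field names here come from the sample above; the rename just avoids the dot in the outer field name):

```spl
... | rename properties.requestbody AS requestbody
| spath input=requestbody
```

This should produce fields such as properties.description and properties.displayName from the embedded document. If the backslash escapes survive into the extracted field value, stripping them first with something like `| eval requestbody=replace(requestbody, "\\\\\"", "\"")` before the spath may be needed.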
Hi, I'm new to regex. Can someone please help me with a regex to extract the file name and file path separately in the data model? The values of the file name and file path fields vary. Thank you. Below is the sample data.
"evidence": [{"entityType": "File", "evidenceCreationTime": "2022-12-19T10:43:56.51Z", "sha1": "336466254f9fe9b5a09f27848317525481dd5dd6", "sha256": "59de220b8d7961086e8d2d1fde61b71a810a32f78a9175f1f87ecacd692b85c9", "fileName": "Nero-8.1.1.0b_fra_trial.exe", "filePath": "F:\\Desktop new backup\\Musique \\Nero 8", "processId": null, "processCommandLine": null, "processCreationTime": null, "parentProcessId":
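A hedged rex sketch for pulling the two values out of the raw event (the pattern assumes the sample's key order and quoting; the output field names file_name and file_path are illustrative):

```spl
... | rex field=_raw "\"fileName\":\s*\"(?<file_name>[^\"]+)\",\s*\"filePath\":\s*\"(?<file_path>[^\"]+)\""
```

If the events are valid JSON, `| spath` would extract fileName and filePath directly and is generally more robust than a regex against _raw.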
We have Splunk Cloud Victoria 9.0.2208.4 and have installed and configured:
- Cisco Cloud Security
- Cisco Cloud Security Umbrella Add-on
We followed the installation steps, but we are getting no data. With this search we see the following logged errors:
index=_internal log_level=ERROR event_message="*umbrella*"
12-21-2022 11:24:02.878 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: File "/opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/solnlib/credentials.py", line 133, in get_password
12-21-2022 11:24:02.878 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: return func(*args, **kwargs)
12-21-2022 11:24:02.878 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: File "/opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/solnlib/utils.py", line 128, in wrapper
12-21-2022 11:24:02.878 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last
12-21-2022 11:24:02.749 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: .
12-21-2022 11:24:02.749 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#TA-cisco-cloud-security-umbrella-addon#configs/conf-ta_cisco_cloud_security_umbrella_addon_settings, user=proxy.
I am new to Splunk and working on a complex query where I am supposed to implement SQL's NOT IN functionality along with eval. I want to skip all the IN-PROGRESS events that later went into the COMPLETED state, and display all the events that are still in the IN-PROGRESS state. For example:
COMPLETED events: event1, event5, event4, event7
IN-PROGRESS events: event3, event1, event4
Expected result: event3
Given below are the queries to fetch the COMPLETED and IN-PROGRESS events:
index=abc message="*COMPLETED*" | eval splitStr=split(message, ",") | eval eventName=mvindex(splitStr,1) | table eventName
index=abc message="*IN-PROGRESS*" | eval splitStr=split(message, ",") | eval eventName=mvindex(splitStr,1) | table eventName
Thank you in advance.
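One hedged way to express NOT IN in SPL is to collapse both states per event name with stats and keep only the names that never reached COMPLETED (index and message format taken from the queries above):

```spl
index=abc (message="*COMPLETED*" OR message="*IN-PROGRESS*")
| eval eventName=mvindex(split(message, ","), 1)
| eval state=if(like(message, "%COMPLETED%"), "COMPLETED", "IN-PROGRESS")
| stats values(state) AS states BY eventName
| where isnull(mvfind(states, "COMPLETED"))
| table eventName
```

Here stats gathers every state seen for each eventName, and mvfind returns null only when COMPLETED never appears, which mirrors NOT IN over the completed set.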
Hello Splunk Community, I'm running a script using the Splunk CLI to retrieve the required information. The script has previously run multiple times without issue. I am now receiving the following error, but only for specific dates.
FATAL: Invalid value "14/10/2022:2:0:00" for time term 'earliest'
I can reproduce the problem in the graphical interface, but if I change the date to '12/10/2022' the query is successful. Likewise, searching for all logs for that date through the GUI returns the logs for the day. The script has already worked through the first 12 days of the month without error, so the syntax is good and the logs are indexed. Does anyone have any ideas why I am receiving this error only for specific dates within the month?
PS: I can also reproduce it in a different month with the same dates: 12 returns results, 13 returns an error. Kind regards,
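The failure pattern (days 1-12 succeed, 13 and above fail) suggests the script emits dates as day/month/year while Splunk time terms expect %m/%d/%Y:%H:%M:%S, so "14/10/2022" is parsed as month 14 and rejected, while "12/10/2022" silently means December 10th rather than October 12th. A sketch of the expected form for that same window:

```spl
earliest="10/14/2022:2:0:00" latest="10/15/2022:2:0:00"
```

Swapping the day and month fields in the script's date formatting should make every day of the month valid.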
Hello Splunkers, I am currently having parsing problems with my Splunk heavy forwarder. I know I have heavy regexes that are causing typing-queue problems, but I do not understand why Splunk is not using more CPU on my machine (CPU is always around 10-15%). Thanks a lot, GaetanVP
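Low overall CPU with a full typing queue often means a single ingestion pipeline is bottlenecked on one core while the rest of the box idles. One hedged knob (assuming spare cores and memory; check the capacity guidance for your version before changing it) is to add a second pipeline set in server.conf:

```ini
# server.conf on the heavy forwarder (sketch)
[general]
parallelIngestionPipelines = 2
```

Each pipeline set gets its own queues and processor threads, so regex-heavy parsing can spread across cores; simplifying or anchoring the heavy TRANSFORMS regexes is the complementary fix.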
Good evening, We are currently unable to connect to the following Splunk Cloud trial instance, which expires on December 29th. Could you please investigate this issue?

15:51 $ curl -k -H "Authorization: Splunk a19b174b-9x9x-4e02-a83f-9999999999999" -v -d '{"index": "moacir-splunk-cloud-siem", "event": "blah blah blah","sourcetype": "_json" }' https://prd-p-ojiyn.splunkcloud.com:8088/services/collector/event
* Trying 3.93.228.43:8088...
* TCP_NODELAY set
* connect to 3.93.228.43 port 8088 failed: Connection timed out
* Failed to connect to prd-p-ojiyn.splunkcloud.com port 8088: Connection timed out
* Closing connection 0
curl: (28) Failed to connect to prd-p-ojiyn.splunkcloud.com port 8088: Connection timed out

Warm regards, Moacir
I have a table like this:

product_name  test_result  result_mv  calc_output
A             1            1 2 3      5
A             2            1 2 3      2
A             3            1 2 3      5
B             4            4 6 7      13
B             6            4 6 7      5
B             7            4 6 7      10

You can see the MV field "result_mv". It is the outcome of

| eventstats list(test_result) AS result_mv by product_name

And I have a custom function, for example: Σ( (test_result - result_mv[index])^2 )
Example of the function output (calc_output):
(1-1)^2 + (1-2)^2 + (1-3)^2 = 0+1+4 = 5
(2-1)^2 + (2-2)^2 + (2-3)^2 = 1+0+1 = 2
(3-1)^2 + (3-2)^2 + (3-3)^2 = 4+1+0 = 5
(4-4)^2 + (4-6)^2 + (4-7)^2 = 0+4+9 = 13
(6-4)^2 + (6-6)^2 + (6-7)^2 = 4+0+1 = 5
(7-4)^2 + (7-6)^2 + (7-7)^2 = 9+1+0 = 10
Bottom line, I need to create "calc_output" from "result_mv" by "product_name".
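One hedged SPL sketch for the sum of squared differences: tag each row, expand the multivalue field, square the per-element difference, then aggregate back per row:

```spl
| streamstats count AS row
| mvexpand result_mv
| eval sq = pow(test_result - result_mv, 2)
| stats first(product_name) AS product_name, first(test_result) AS test_result, list(result_mv) AS result_mv, sum(sq) AS calc_output BY row
| fields - row
```

For the A/1 row this computes (1-1)^2 + (1-2)^2 + (1-3)^2 = 5, matching the expected calc_output; list(result_mv) rebuilds the original multivalue column after the expand.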
Hi Everyone, I've got a strange issue and am unable to find a fix. All the indexes have a longer retention period, but the oldest data is limited to 270 days. I checked the index cluster but did not find anything that could be causing this issue. Here is the configuration for all indexes:
[example1]
coldPath = volume:primary/example1/colddb
homePath = volume:primary/example1/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/example1/thaweddb
frozenTimePeriodInSecs = 39420043
I checked the indexers' disk space, and there is still space left for more data. Please let me know if anyone has had a similar experience or a suggestion to increase the retention period. Thanks,
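frozenTimePeriodInSecs = 39420043 is roughly 456 days, so time-based freezing alone should not stop at 270 days; a likely suspect (an assumption worth verifying) is size-based freezing, since an index that reaches maxTotalDataSizeMB = 512000 rolls its oldest buckets to frozen regardless of age. A hedged sketch to check an index's oldest bucket and total size:

```spl
| dbinspect index=example1
| stats min(startEpoch) AS oldest, sum(sizeOnDiskMB) AS totalMB
| eval oldest=strftime(oldest, "%Y-%m-%d")
```

If totalMB sits near the 512000 cap, raising maxTotalDataSizeMB (with matching volume capacity) would be the way to extend effective retention.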
Dear Splunkers, I would like to ask: are there any good sources or websites for Splunk administration material, apart from the Splunk documentation? I would appreciate your kind support.
Hi, I want to extract the values "login-first" and "delaccount" from the result events. The following are 2 sample fields from the result logs:
cf_app_name: AB123-login-first-pr
cf_app_name: CD123-delaccount-pr
Sample query used: index=preprod source=logmon env="preprod"
Please help me extract the fields. Thanks in advance, SGL
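A hedged rex sketch, assuming the wanted segment always sits between the first hyphen and a trailing "-pr" (the output field name app_function is illustrative):

```spl
index=preprod source=logmon env="preprod"
| rex field=cf_app_name "^[^-]+-(?<app_function>.+)-pr$"
```

For the two samples this would capture "login-first" and "delaccount" into app_function.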