All Topics


I am trying to write a search that compares the latest event with its immediately preceding event and shows the difference, if any, for each host. I tried using the earliest and latest functions, but earliest does not return the immediately preceding event. The following is the search I have tried, but I don't think it's right:

index=abc host=xyz
| stats latest(id) as id latest(SN) as SN latest(PN) as PN latest(_time) as time by host
| stats earliest(id) as eid earliest(SN) as eSN earliest(PN) as ePN earliest(_time) as etime by host

Thanks in advance, Splunkers
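One possible approach (a minimal sketch, untested against this data; the prev_* field names are illustrative): sort each host's events by time, use streamstats to carry the previous event's values onto the current one, then keep only the newest event per host and compare.

```
index=abc host=xyz
| sort 0 host _time
| streamstats current=f window=1 last(id) as prev_id last(SN) as prev_SN last(PN) as prev_PN by host
| dedup host sortby -_time
| eval id_changed=if(id!=prev_id, "yes", "no"), SN_changed=if(SN!=prev_SN, "yes", "no"), PN_changed=if(PN!=prev_PN, "yes", "no")
| table host _time id prev_id id_changed SN prev_SN SN_changed PN prev_PN PN_changed
```

With current=f and window=1, streamstats reports the values of the single event before the current one, which is exactly the "immediately preceding event" per host.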
Hello, I need some help. I have an index where I pull all of the HR info for our employees, and then a CSV I bring in using the lookup command; the CSV has all of the machine info for each user. The CSV file has no multivalue fields, so if a user has multiple computers there is a separate line in the CSV for each one. What is happening is that when I run this search, Splunk creates multivalue fields for all the machine info when a person has multiple computers. I have tried using mvexpand, but I would have to do it on 14 different fields, and then I get a memory error; I cannot increase the memory because we have certain restrictions on that. Even with a single mvexpand I get the same memory error, and the report will produce over 140,000 entries. Below is my basic search; it produces the multivalue fields, which I do not want. When a user has multiple machines, I need the report to create a completely separate row for each. This version does not include the mvexpand. Is there any way to do that without using mvexpand?

(index=wss_desktop_os sourcetype="associate" LOGON_ID="*") LOCATION IN ("CA1*", "CA2*", "CA3*", "CA4*", "CA5*", "CA6*")
| stats values(SPVR_FLL_NM) AS Supervisor, values(EMP_STA_TX) AS "Employee Status", values(SPVR_EMAIL_AD) AS "Supervisor Email", values(L2_LN1_AD) AS Address, values(L2_CTY_NM) AS City, values(SITE_COUNTRY) AS Country, values(DEPARTMENT) AS Department, values(DIV_LG_NM) AS Division, values(L2_FLR_NO) AS Floor, values(FLL_NM) AS FullName, values(LOCATION) AS Location, values(L2_CNY_CD) AS Region, values(L2_CNY_NM) AS SiteCountry, values(LOB) AS ORG, values(L2_STPV_NM) AS State, values(WRK_EMAIL_AD) AS Email by LOGON_ID
| lookup local=true PrimaryUser.csv PrimaryUser AS LOGON_ID OUTPUT host AS host BuildNumber Cores DeviceType InstallDate LastBootUpTime LastReported Locale Manufacturer Model OSVer PCSystemType SerialNumber TotalPhysicalMemoryKB TotalVirtualMemoryKB
| where isnotnull(host)
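One mvexpand-free sketch worth trying (untested, and join has its own result limits, so treat it as an option rather than a guaranteed fix): aggregate the HR fields to one row per LOGON_ID as above, then join against the CSV with max=0, so each matching CSV row produces its own output row instead of a multivalue field. Only a few of the fields are shown here for brevity.

```
(index=wss_desktop_os sourcetype="associate" LOGON_ID="*") LOCATION IN ("CA1*", "CA2*", "CA3*", "CA4*", "CA5*", "CA6*")
| stats values(FLL_NM) AS FullName values(DEPARTMENT) AS Department values(WRK_EMAIL_AD) AS Email by LOGON_ID
| join type=inner max=0 LOGON_ID
    [| inputlookup PrimaryUser.csv
     | rename PrimaryUser AS LOGON_ID]
| table LOGON_ID FullName Department Email host Model SerialNumber
```

max=0 keeps every matching CSV row, so a user with three machines becomes three rows; the remaining values(...) fields from the original search can be added back the same way.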
I am new to Splunk. I want to get syslog data into Splunk. Should I install a third-party app to receive syslog, or is there another way to get syslog from Windows? I am using Windows 10.
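For a simple lab setup, a minimal inputs.conf stanza can open a syslog listener directly on a Splunk instance. This is only an illustrative sketch; the index name and port are assumptions, and production deployments usually put a dedicated syslog server (e.g. rsyslog, syslog-ng, or Splunk Connect for Syslog) in front of Splunk instead.

```
# inputs.conf - minimal sketch of a network syslog listener
# UDP 514 is the traditional syslog port; binding it may require elevated privileges.
[udp://514]
sourcetype = syslog
index = main
connection_host = dns
```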
I want to check whether ASA version 9.14 is supported by the "Splunk Add-on for Cisco ASA" 4.2 or 5.1. The documentation shows Cisco ASA v9.12, v9.13, v9.16, and v9.17. Is there a reason v9.14 is not shown or supported?
Hello, I need to delete some old logs on my cloud instance because I have run out of space. Is there any way to remove old logs without using the delete command in a search? I need to remove these logs through the web interface only. Thanks.
I have successfully implemented this in my personal account, and it works fine with the local HEC -> Splunk Cloud. In my organization's environment, however, the local HEC returns a successful response code but the data is never indexed, because the HEC cannot send the data to the Splunk Cloud indexers. I noticed that I am not able to connect to the indexers' domain names or IP addresses on port 9997. Do I need to explicitly allow a rule from Splunk Cloud for the indexers?
Hi Team, is there a way to edit KV store data? Some of the columns are hidden, due to which I am not able to extract the data in my query. Can anyone help me troubleshoot this issue? Thanks in advance.
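If the collection has a lookup definition, one common way to edit KV store data is through SPL (a sketch only; my_kvstore_lookup and the status field are placeholders for your own lookup definition and fields):

```
| inputlookup my_kvstore_lookup
| eval status="closed"
| outputlookup my_kvstore_lookup
```

Note that outputlookup replaces the collection contents with the search results by default, so check the result set before writing; the REST endpoint storage/collections/data offers finer-grained edits.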
Hi, we have onboarded Ping Federate logs into Splunk, but multiple log events are being combined into a single event. Can someone help me understand why this is happening and how I can fix it? A log sample is below. {"owner": "689186784177", "logGroup": "/aws/containerinsights/prod/application", "logStream": "pingaccess-was-admin-0_ping-cloud_pingaccess-was-admin", "logEvents": [{"id": "37013848036576111097453574573076270650086865051558412288", "timestamp": 1659758349197, "message": {"log": "<134>Aug 6 03:59:04 pingaccess-was-admin-0 , {\"thread\":\"ReqProc-StageThread[2]/LAUQHf9OalMaVcCzi173zw\",\"level\":\"INFO\",\"loggerName\":\"apiaudit\",\"message\":\"PA Audit with data \",\"endOfBatch\":false,\"loggerFqcn\":\"org.apache.logging.slf4j.Log4jLogger\",\"instant\":{\"epochSecond\":1659758344,\"nanoOfSecond\":121700000},\"threadId\":64,\"threadPriority\":5,\"date\":\"2022-08-06T03:59:04+0000\",\"exchangeId\":\"LAUQHf9OalMaVcCzi173zw\",\"roundTripMS\":\"1\",\"subject\":\"Administrator\",\"authMech\":\"Basic\",\"client\":\"127.0.0.1\",\"method\":\"GET\",\"resource\":\"* [] /pa-admin-api/v3 /version:-1\",\"requestUri\":\"/pa-admin-api/v3/version\",\"responseCode\":\"200\",\"logPurpose\":\"Audit_SIEM\"}\r\n", "stream": "stdout", "docker": {"container_id": "074987ed295dde1388b9e983bfe7bef9e7140a420a16070a42b57cc5a68ba0be"}, "kubernetes": {"container_name": "pingaccess-was-admin", "namespace_name": "ping-cloud", "pod_name": "pingaccess-was-admin-0", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingaccess-was:6.3.1-v1.0.19", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingaccess-was@sha256:f5d3fea0441c96815e1c89d108d9e57c3a8db0e3809a7c597a5000754a9b22ec", "pod_id": "1b0c2182-cdf3-44aa-aecb-26d9ed968169", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingaccess-was-cluster", "controller-revision-hash": "pingaccess-was-admin-66c966594f", "role": "pingaccess-was-admin", 
"statefulset_kubernetes_io/pod-name": "pingaccess-was-admin-0"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}}}, {"id": "37013848148102137835305220903896397731601320942966472705", "timestamp": 1659758354198, "message": {"log": "<134>Aug 6 03:59:09 pingaccess-was-admin-0 , {\"thread\":\"ReqProc-StageThread[4]/vbAQ-x_FAX3vzYPGAFJILA\",\"level\":\"INFO\",\"loggerName\":\"apiaudit\",\"message\":\"PA Audit with data \",\"endOfBatch\":false,\"loggerFqcn\":\"org.apache.logging.slf4j.Log4jLogger\",\"instant\":{\"epochSecond\":1659758349,\"nanoOfSecond\":145851000},\"threadId\":69,\"threadPriority\":5,\"date\":\"2022-08-06T03:59:09+0000\",\"exchangeId\":\"vbAQ-x_FAX3vzYPGAFJILA\",\"roundTripMS\":\"1\",\"subject\":\"Administrator\",\"authMech\":\"Basic\",\"client\":\"127.0.0.1\",\"method\":\"GET\",\"resource\":\"* [] /pa-admin-api/v3 /version:-1\",\"requestUri\":\"/pa-admin-api/v3/version\",\"responseCode\":\"200\",\"logPurpose\":\"Audit_SIEM\"}\r\n", "stream": "stdout", "docker": {"container_id": "074987ed295dde1388b9e983bfe7bef9e7140a420a16070a42b57cc5a68ba0be"}, "kubernetes": {"container_name": "pingaccess-was-admin", "namespace_name": "ping-cloud", "pod_name": "pingaccess-was-admin-0", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingaccess-was:6.3.1-v1.0.19", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingaccess-was@sha256:f5d3fea0441c96815e1c89d108d9e57c3a8db0e3809a7c597a5000754a9b22ec", "pod_id": "1b0c2182-cdf3-44aa-aecb-26d9ed968169", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingaccess-was-cluster", "controller-revision-hash": "pingaccess-was-admin-66c966594f", "role": "pingaccess-was-admin", "statefulset_kubernetes_io/pod-name": "pingaccess-was-admin-0"}, 
"master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}}}], "envType": "prod"}   {"owner": "689186784177", "logGroup": "/aws/containerinsights/prod/application", "logStream": "server_logs.pingfederate-1_ping-cloud_pingfederate", "logEvents": [{"id": "37013848024578310180644099321922695591001628886545793024", "timestamp": 1659758348659, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:02,867 tid:qf8Wb6tGdzy8PN4tl93rHepNMR0 DEBUG [org.sourceid.websso.servlet.IntegrationControllerServlet] GET: https://localhost:9031/pf/heartbeat.ping\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848024578310180644099321922695591001628886545793025", "timestamp": 1659758348659, "message": {"log": 
"/opt/out/instance/log/server.log 2022-08-06 03:59:02,868 tid:qf8Wb6tGdzy8PN4tl93rHepNMR0 DEBUG [org.sourceid.servlet.HttpServletRespProxy] flush cookies: adding Cookie{PF=hashedValue:qf8Wb6tGdzy8PN4tl93rHepNMR0; path=/; maxAge=-1; domain=null}\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848024578310180644099321922695591001628886545793026", "timestamp": 1659758348659, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:07,871 DEBUG [org.sourceid.util.log.internal.TrackingIdSupport] The incoming request does not contain a unique identifier. 
Assigning auto-generated request ID: OdFVgBFxp1JYNoiCQenljYo0I\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848085437043827434169875173670757059007436366348291", "timestamp": 1659758351388, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:07,871 DEBUG [org.sourceid.servlet.HttpServletRespProxy] adding lazy cookie Cookie{PF=hashedValue:VizAUYH0x9Lu7GbN_Rqv9fevZ7c; path=/; maxAge=-1; domain=null} replacing null\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": 
"docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848091480545776235968746529850408946713404487041028", "timestamp": 1659758351659, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:07,871 tid:VizAUYH0x9Lu7GbN_Rqv9fevZ7c DEBUG [org.sourceid.websso.servlet.IntegrationControllerServlet] GET: https://localhost:9031/pf/heartbeat.ping\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", 
"namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848091480545776235968746529850408946713404487041029", "timestamp": 1659758351659, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:07,871 tid:VizAUYH0x9Lu7GbN_Rqv9fevZ7c DEBUG [org.sourceid.servlet.HttpServletRespProxy] flush cookies: adding Cookie{PF=hashedValue:VizAUYH0x9Lu7GbN_Rqv9fevZ7c; path=/; maxAge=-1; domain=null}\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848091480545776235968746529850408946713404487041030", "timestamp": 1659758351659, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:11,388 DEBUG [org.sourceid.util.log.internal.TrackingIdSupport] The incoming request does not contain a unique identifier. 
Assigning auto-generated request ID: ssP8SFNvaJjisaEtFgH9aIN3q\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848118464447466458022747788069518851230826723344391", "timestamp": 1659758352869, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:11,388 DEBUG [org.sourceid.servlet.HttpServletRespProxy] adding lazy cookie Cookie{PF=hashedValue:hFl_yLtHqOydi68KkvReugURSyc; path=/; maxAge=-1; domain=null} replacing null\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": 
"docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}], "envType": "prod"}  
Hi there, I am trying to build a Splunk alert with Slack that passes a table column of values as an array of values, e.g.

Result Table
===========
Field1 Field2
A1     B1
A2     B2

Expected Alert Message
===========
Field1 : ["A1", "A2"]

I am currently referencing the following documentation, using the result token $result.Field1$. However, it shows only the value in the first row, i.e. Field1 : A1. Is it possible to have the alert message contain an array of values instead? Thanks in advance!
https://docs.splunk.com/Documentation/Splunk/8.2.1/Alert/EmailNotificationTokens
https://github.com/splunk/slack-alerts/issues/30
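Since $result.Field1$ only reads the first result row, one workaround (a sketch, untested with the Slack add-on) is to collapse the column into a single row whose value already looks like an array, at the end of the alert's search:

```
... base search ...
| stats values(Field1) as Field1
| eval Field1 = "[\"" . mvjoin(Field1, "\", \"") . "\"]"
```

The search then returns one row, so $result.Field1$ expands to the full joined string, e.g. ["A1", "A2"].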
*Note: just sharing this issue with its resolution.
Issue: Splunk_TA_snow upgrade issue
Description: There was a requirement to upgrade the ServiceNow TA from 7.1.1 to 7.4.0. After the upgrade, we noticed that the "Configuration" section was not working (see screenshot below). Error found in /opt/splunk/var/log/splunk/splunkd.log:

07-29-2022 05:00:47.312 +0000 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/handler.py", line 124, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/handler.py", line 345, in _format_all_response
    self._encrypt_raw_credentials(cont["entry"])
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/handler.py", line 375, in _encrypt_raw_credentials
    change_list = rest_credentials.decrypt_all(data)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/credentials.py", line 293, in decrypt_all
    all_passwords = credential_manager._get_all_passwords()
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/utils.py", line 153, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/credentials.py", line 283, in _get_all_passwords
    clear_password += field_clear[index]
TypeError: can only concatenate str (not "NoneType") to str

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 113, in init_persistent
    hand.execute(info)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 636, in execute
    if self.requestedAction == ACTION_LIST:     self.handleList(confInfo)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/admin_external.py", line 63, in wrapper
    for entity in result:
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/handler.py", line 131, in wrapper
    raise RestError(500, traceback.format_exc())
splunktaucclib.rest_handler.error.RestError: REST Error [500]: Internal Server Error -- Traceback (most recent call last):
  [same traceback as above, ending in: TypeError: can only concatenate str (not "NoneType") to str]

Solution: We performed a fresh install of Splunk_TA_snow, because upgrading the app gave the same error. We found conflicts with the following apps: splunk_app_addon-builder, Splunk_TA_microsoft-cloudservices, and TA-defender-atp-hunting. After removing these apps, Splunk_TA_snow is working as expected.
Well, there's no good section for this so I'll just post it here. I'm trying to do some drawings using the stencils from https://docs.splunk.com/Documentation/Community/current/community/Resources Aaaand they don't work very well with Office 365 Visio. If I pull an icon into my drawing it's either completely filled with solid colour (the default option), or, if I change the fill to "none" and the line colour to black, I get a silly-looking "hollow" icon (on both screenshots it's the same icon of multiple indexers). Fiddling with styles doesn't help much either. Oh, and by default the caption text is not visible at all. So the question is: do you have any experience-based hints on how to properly use the icons in Visio? As a side rant: who thought it would be a good idea to edit caption text by going Group->Open->Edit text?
Hi all, I have just downloaded the app "SSL Certificate lookup" from Splunkbase and it's working fine with the following query:

| makeresults
| eval dest="myhost1, myhost2", dest=split(dest, ",")
| lookup sslcert_lookup dest
| eval dayleft=round(ssl_validity_window/86400)
| table dest, dayleft, ssl_is_valid, ssl_issuer_common_name, ssl_self_issued, ssl_self_signed, ssl_version

However, myhost1 and myhost2 are hardcoded in the initial query, and I would like to dynamically pass as parameters all hosts matching a specific query:

index=* host=*myserver*

I have tried several things without success (subsearch, saved search, macro...). Any idea how I could achieve that? Any help would be greatly appreciated!
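One sketch worth trying (untested; it assumes the lookup accepts any host value as dest): instead of makeresults, build the dest list from the matching events themselves, so every host flows through the lookup.

```
index=* host=*myserver*
| stats count by host
| rename host as dest
| lookup sslcert_lookup dest
| eval dayleft=round(ssl_validity_window/86400)
| table dest, dayleft, ssl_is_valid, ssl_issuer_common_name, ssl_self_issued, ssl_self_signed, ssl_version
```

The stats count by host step deduplicates the host list before the lookup runs, which keeps the number of lookup invocations down.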
Dear Splunkers, we are using Splunk in a distributed environment with an SHC; what is the best approach to data inputs? For example, can I create a TCP or UDP input on one of the SHs? And can I create an HEC input in an SHC environment? Will it replicate to the remaining SHs? Your help is very appreciated.
Hi All, I am appending two macros with the append command to generate the following result set. The first row comes from one macro and the second row from the other. The field rule_id is common to both macro result sets. How can I achieve the following? The end goal is to show this in a dashboard, so I am looking to consolidate the data into one common row. Any suggestions? I have tried using eval as recommended by @gcusello in Solved: Merging events from two indexes - Splunk Community, but it's not working out in my case.

Desired Output:

Triggered_time       Acknowledged_time    difference        rule_id
2022-08-03 23:27:13  2022-08-03 23:28:37  00:01:24.9021888  xxxxx
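Since rule_id is common to both result sets, one sketch (untested; it assumes each macro contributes at most one row per rule_id, and macro_one/macro_two are placeholders for the two macros being appended) is to let stats collapse the appended rows into one:

```
`macro_one`
| append [ `macro_two` ]
| stats values(Triggered_time) as Triggered_time values(Acknowledged_time) as Acknowledged_time values(difference) as difference by rule_id
```

stats fills each column from whichever appended row carries it, producing one consolidated row per rule_id.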
Hi Team, can we monitor lookup files from an updates perspective, i.e. who updates what in a lookup file, or even in a KV store? This is one of our monitoring requirements, so that if something comes up tomorrow we can backtrack and answer who, what, and when. Thanks in advance.
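As a starting point (a sketch only; the exact sourcetype and field extractions may differ in your environment), lookup edits made through the UI or REST API are typically visible in Splunk's internal access logs, which record the REST calls:

```
index=_internal sourcetype=splunkd_access method=POST uri="*lookup*"
| table _time user clientip method uri status
```

This shows who made the change and when, but not the before/after contents; for content-level diffs you would need to snapshot the lookup periodically (e.g. a scheduled search writing to a backup lookup).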
I have a search like this:

sourcetype=Grandstream
| stats count by _time phone starttime answer endtime

Result:

_time                phone       starttime            answer               endtime              count
2022-08-09 14:30:42  xxx39xxxx   2022-08-04 14:33:58  2022-08-04 14:34:02  2022-08-04 14:34:02  1
2022-08-09 14:30:42  xxx394xxxx  2022-08-04 14:34:02  2022-08-04 14:34:02  2022-08-04 14:34:02  1
2022-08-09 14:30:42  xxx1394xxx  2022-08-04 14:34:03  2022-08-04 14:34:03  2022-08-04 14:34:09  1
2022-08-09 14:30:42  xxx1382xx   2022-08-09 14:28:52  2022-08-09 14:28:52  2022-08-09 14:29:25  1

But _time and starttime don't match, because the log time is pushed wrong. Is there a way to filter by the starttime field so it falls within a week running from 0h Friday to 24h Thursday? Thanks.
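If the goal is the current reporting week running from Friday 00:00 through Thursday 24:00, one sketch (untested; it assumes starttime is formatted as shown above) is to parse starttime and compare it against a Friday-anchored window. In Splunk's snap-to syntax, @w5 snaps backwards to the most recent Friday:

```
sourcetype=Grandstream
| eval start_epoch = strptime(starttime, "%Y-%m-%d %H:%M:%S")
| eval week_start = relative_time(now(), "@w5")
| eval week_end = week_start + 7*86400
| where start_epoch >= week_start AND start_epoch < week_end
| stats count by _time phone starttime answer endtime
```

Because the filter uses starttime rather than _time, the mismatched index time no longer affects which calls are included.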
Hi, I have a line in the event like "/v1/locations/7b-cec6-4820-b699-ec". I need to extract 7b-cec6-4820-b699-ec, or whatever comes after /v1/locations/ and before a ". Please help with the same. Thank you.
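A sketch of one way to do this with rex (the field name location_id is illustrative): capture everything after /v1/locations/ up to, but not including, the next double quote.

```
... | rex field=_raw "/v1/locations/(?<location_id>[^\"]+)"
| table location_id
```

The \" inside the character class is only escaping the quote for the SPL string; the regex itself is [^"]+, i.e. one or more characters that are not a double quote.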
Does rex in Splunk support variables in a regular expression? For example, a user could input text from the UI; usually I need a variable like $kw$ to receive the input from the user, and then use $kw$ in the rex command. Can Splunk support this, and how? Thanks.
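In a Simple XML dashboard this generally works, because tokens are substituted into the search string before the search runs. A sketch (the token name kw and field name hit are assumptions, the index/sourcetype are placeholders, and the user's input must itself be valid regex):

```
index=main sourcetype=access_combined
| rex field=_raw "(?<hit>$kw$)"
| stats count by hit
```

If the input should be treated as a literal keyword rather than a pattern, regex metacharacters in the user's text would need to be escaped before substitution.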
Hi Splunkers, I am planning a Splunk integration in our AWS environment, but I am a beginner with AWS, so could you please help me with the AWS sourcetype details and let me know which ones are required from a security perspective? And if you have use cases about security, please share them with me.
Hi Team, good day! I just wanted to check if you can share the links for older versions of the Splunk Enterprise/UF installers:

Splunk Enterprise v6.6.2 (Linux 32- and 64-bit)
Splunk Enterprise v7.1.10 (Linux 32- and 64-bit)
Splunk Enterprise v8.0.10 (Linux 32- and 64-bit)

Splunk UF v6.6.2 (Linux 32- and 64-bit)
Splunk UF v7.1.10 (Linux 32- and 64-bit)
Splunk UF v8.0.10 (Linux 32- and 64-bit)

Also, the final target is Splunk Enterprise version 8.2.7. Can you confirm which order below would work?

Option 1: Splunk v6.6.2 -> upgrade to v7.1.10 -> upgrade to v8.0.10 -> upgrade to v8.2.7
Option 2: Splunk v6.6.2 -> upgrade to v7.1.10 -> upgrade to v8.2.7 (can we upgrade directly from v7.1.10 to v8.2.7?)

We are planning to upgrade some very old Splunk instances, and we want to simulate this first in our test environment. We look forward to your assistance. Thank you.