All Topics

I have a group of 'filters' in a dashboard that I can add to or remove from. A filter comprises three fields: a dropdown containing field names, a selector indicating the comparison type, and a text box containing the value. I want a SINGLE panel where these three fields are displayed horizontally - simple. I'm looking for CSS to solve this particular problem: I want to be able to add a new filter set on a separate line below the first one, in the same panel, not in a new row. Below is the single group, but I want to have other groups of inputs that are conditionally shown using token depends. Normally inputs flow horizontally across the screen, so the fourth input would sit to the right of the data_f_1 field below. How can I get CSS to force any input with an id of f_* to break to a new line, so it is left-aligned in the panel?

<row id="filter_selector_row_1" depends="$f_1$">
  <panel>
    <input id="f_1" depends="$f_1$" type="dropdown" token="field_f_1" searchWhenChanged="true">
      <label>Field</label>
      <fieldForLabel>label_name</fieldForLabel>
      <fieldForValue>field_name</fieldForValue>
      <search base="object_list">
        <query></query>
      </search>
      <change>
        <eval token="filter_1">$field_f_1$$selector_f_1$"*$data_f_1$*"</eval>
      </change>
    </input>
    <input depends="$f_1$" type="dropdown" token="selector_f_1" searchWhenChanged="true">
      <label>Selector</label>
      <choice value="=">Equals</choice>
      <choice value="!=">Not Equals</choice>
      <initialValue>=</initialValue>
      <change>
        <eval token="filter_1">$field_f_1$$selector_f_1$"*$data_f_1$*"</eval>
      </change>
    </input>
    <input depends="$f_1$" type="text" token="data_f_1" searchWhenChanged="true">
      <label>Data</label>
      <change>
        <set token="filter_1">$field_f_1$$selector_f_1$"*$data_f_1$*"</set>
      </change>
    </input>
  </panel>
</row>
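One common approach (a sketch, not from the original post; the alwaysHideCSS token and the selector are assumptions) is to add a hidden HTML panel that injects CSS. Simple XML floats inputs to the left, so clearing the float on any element whose id starts with f_ pushes that input, and the inputs that follow it on the same line, down to a new left-aligned line:

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* Assumption: each dashboard input renders as a floated div
           carrying the input's id, so clear:left forces a line break */
        div[id^="f_"] {
          clear: left !important;
        }
      </style>
    </html>
  </panel>
</row>

Because $alwaysHideCSS$ is never set, the row stays invisible while the stylesheet still loads.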
I have ack enabled for a HEC input. I can successfully send data into Splunk with GUID #1. With the same curl command but a different GUID #2, the data is not in Splunk, yet the response is Success and the ack ID request also returns true. I looked in splunkd.log and I see this:

08-09-2022 17:10:04.755 -0700 ERROR JsonLineBreaker [49865490 parsing] - JSON StreamId:0 had parsing error:Unexpected character while looking for value: 'c' - data_source="http:dev", data_host="host:8088", data_sourcetype="sourcetype_name"
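The error means the event payload was not valid JSON (the parser hit a bare character 'c' where a value should start), so the body sent with GUID #2 is worth comparing character by character against the one sent with GUID #1. A quick way to confirm whether every failing request hits the same parser error (a sketch; splunkd auto-extracts the fields shown from the message's key=value pairs):

index=_internal sourcetype=splunkd log_level=ERROR component=JsonLineBreaker
| stats count by data_source, data_sourcetype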
Hi guys, I have a query that works and gives me a table like the one below. What I want to do is exclude rows where the combination of Field1 and Field2 occurs more than once. In other words, if the combination of svchost and services.exe is seen more than once (in this case we see it twice), it should be excluded from the results. How could I do this? I've tried, but I can't get my head around this one. Thanks for your help in advance.

Field1      Field2          Field3
svchost     services.exe    c:\windows\system32
rdp.exe     cmd.exe         c:\windows\system32
svchost     services.exe    c:\windows\system32
wmic.exe    powershell.exe  c:\windows\system32
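One way to do this (a sketch; replace the first line with your actual search):

<your base search>
| eventstats count AS combo_count BY Field1, Field2
| where combo_count <= 1
| fields - combo_count

eventstats attaches the per-combination count to every row without collapsing the rows, so the where clause can then drop any Field1/Field2 pair that appears more than once.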
Hello, I'm trying to pull the latest values for every 4 hours in a day, i.e. the latest values between 00:00:00 and 04:00:00, 04:00:00 and 08:00:00, 08:00:00 and 12:00:00, 12:00:00 and 16:00:00, and so on. Below is an example of what the data looks like. TIA
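Since the sample data is an image, here is only a sketch with assumed field names (value, host). bin buckets events into fixed 4-hour windows aligned to midnight, and latest() then picks the newest value in each window:

<your base search>
| bin _time span=4h
| stats latest(value) AS latest_value BY _time, host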
I am attempting to build a search that pulls back all logs that have one value in a multivalue field but do not have certain other values, while a few values may or may not exist. To break it down:

The field "names" must have "bob".
The field "names" can have any or all of "tom", "dan", "harry", but is not required to have them.
The field "names" cannot have any other value.

I do not have a full list of the names, and they can change over time, so it is not possible to make a list of the "names" I do not want. I need other values from the logs; I'm just filtering by the "names" field as an example. A few examples:

"bob" = returned in the search
"bob","tom" = returned in the search
"tom","dan" = not returned in the search
"bob","sam" = not returned in the search
"bob","harry","fred" = not returned in the search

I am having trouble figuring out what to use to exclude multivalue fields in this way.
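One sketch that avoids enumerating the unwanted names: require bob, strip out every allowed name with mvfilter, and keep only events where nothing is left over:

<your base search> names="bob"
| eval leftover=mvfilter(NOT match(names, "^(bob|tom|dan|harry)$"))
| where isnull(leftover)
| fields - leftover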
Hello Splunk community, I'm testing the Splunk Timeline - Custom Visualization plugin. I'd like to visualize distributed tracing in my microservices architecture: app name, request URL, response codes, etc. According to the documentation, I can specify only one resource field. How can I use more fields? I also found a Splunk .conf19 presentation, and the 19th slide looks promising. Unfortunately, the presentation doesn't provide implementation details. Could you point me in the right direction to achieve the same, please? Any help would be much appreciated. Thanks!
Hello, I'm trying to create a visualization that uses results from a KV store as a filter and then queries an index. Basically:

1) A KV store collection, for example Assets (hostname, ip, key_id, ...), used via inputlookup. This is much faster and can be populated from multiple indexes more easily (it also avoids the 50k join limit).
2) A search over the last 7 days of an index that holds 200k+ results. The index search should be filtered by key_id (returned from the KV store; the KV store can be filtered much more granularly than the index we query later, since the index does not hold some of the fields we want to filter by).

The query executes and the KV store returns the key_ids that should be passed as a filter to the index search. What is the best way to filter based on two searches over big data sets (each data set is 50k+ events)? I'm currently using this (filter shown with * so it can match 1 or 50k+ key_ids):

index=test [| inputlookup kv_store_lookup where filter=* | fields key_id ]

This search works well when the filter returns 10, 20, or 50 key_ids (results in a matter of seconds); when it is "*" with 10k+ key_ids, it is a little slow (10+ seconds). Is there some better way, or are my queries fine, for a visualization search combined from two searches where the first search returns the key_ids the second search should use?
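When the key list gets large, one alternative sketch is to invert the direction: instead of injecting 10k+ key_ids into the base search, look each event's key_id up against the KV store and keep only the events that matched (kv_store_lookup and the filter field are the names from the post):

index=test
| lookup kv_store_lookup key_id OUTPUT filter
| where isnotnull(filter)

This trades a huge generated search string for one lookup per event, which often scales better; measuring both against your 200k-event window is the only way to know which wins.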
Client   Error   Error Results   Error Results Previous week   Percent of Total   PercentDifference
abc      1003    2               0                             12.5               0
abc      1003    3                                             12.5               0
abc      1013    1               2                             342                -50
abc      1027    3               3                             5                  0
abc      1027    5               xyz                           43                 zyz
abc      1013    2               zyz                           432                et
abc      Total   16              zyds                          423                tert
Client Error Error Results Error ResultsPrevious week Percent of Total PercentDifference abc 1003 2 0 12.5 0 abc 1003 3   12.5 0 abc 1013 1 2 342 -50 abc 1027 3 3 5 0 abc 1027 5 xyz 43 zyz abc 1013 2 zyz 432 et abc Total 16 zyds 423 tert   My code is   --    | bucket _time span=1w | stats count as Result by LicenseKey, Error_Code | eval Client=coalesce(Client,LicenseKey) | eventstats sum(Result) as Total by Client | eval PercentOfTotal = round((Result/Total)*100,3) | sort - _time | streamstats current=f latest(Result) as Result_Prev by LicenseKey | eval PercentDifference = round(((Result/Result_Prev)-1)*100,2) | fillnull value="0" | append [ search index=abc sourcetype=yxx source= bff ErrorCode!=0 | `DedupDHI` | lookup abc LicenseKey OUTPUT Client | eval Client=coalesce(Client,LicenseKey) | stats count as Result by Client | eval ErrorCode="Total", PercentOfTotal=100] | lookup xyz_ErrorCodes ErrorCode OUTPUT Description | lookup uyz LicenseKey OUTPUT Client | eval Client=coalesce(Client,LicenseKey) | eval Error=if(ErrorCode!="Total", ErrorCode+" ("+coalesce(Description,"Description Missing - Update xyz_ErrorCodes")+")", ErrorCode) | fields Client, Error, Result, PercentOfTotal, PercentDifference, Error results previous week | sort CustomerName, Error, PercentDifference   Still not able to figure out the duplicate row issue, single row for one each error combined with total. any suggestions please? 
I want to extract each package line as an individual result. I tried rex "Linux\ssystem\s\:\s+(?<packages>.+)", but that only extracts the first package line. I also tried rex "Linux\ssystem\s\:\s+(?<packages>(.+\w{1,3}\s\w{1,3}(\s+)?\d{1,2}\s\d{1,2}\:\d{1,2}\:\d{1,2}\s\d{4})", but got the same first line.

Here is the list of packages installed on the remote CentOS Linux system :
python-prettytable-0.7.2-3.el7|(none) Wed Jan 9 20:38:03 2019
gettext-0.19.8.1-3.el7|(none) Wed May 13 07:35:27 2020
cpp-4.8.5-44.el7|(none) Tue Feb 2 09:59:27 2021
kmod-20-28.el7|(none) Tue Feb 2 09:59:31 2021
glibc-2.17-324.el7_9|(none) Wed Mar 16 18:10:11 2022
diffutils-3.3-5.el7|(none) Tue Feb 2 09:59:00 2021
elfutils-default-yama-scope-0.176-5.el7|(none) Tue Feb 2 09:59:35 2021
glibc-2.17-324.el7_9|(none) Wed Mar 16 18:10:12 2022
numactl-libs-2.0.12-5.el7|(none) Tue Feb 2 09:59:02 2021
device-mapper-event-1.02.170-6.el7_9.3|7 Tue Feb 2 09:59:51 2021
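rex returns only the first match unless you raise max_match; a sketch that captures every package line into a multivalue field and then splits it into one result per package:

| rex max_match=0 "(?m)^(?<packages>\S+\|\S+\s+\w{3}\s+\w{3}\s+\d{1,2}\s+\d{1,2}:\d{2}:\d{2}\s+\d{4})$"
| mvexpand packages

max_match=0 means unlimited matches, and the (?m) flag lets ^ and $ anchor to each line within the event rather than to the whole event.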
This is just a question for my learning. When SQL data is sent to Splunk via SQL scripts, do you use SQL syntax or Splunk's query language? And can you format your rows and columns in the same manner? I'm crowd-sourcing to better build my report.
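For illustration (hypothetical index and field names): once the rows are indexed, you query them with SPL rather than SQL, but most SQL constructs map directly. For example, SELECT status, COUNT(*) FROM orders GROUP BY status roughly becomes:

index=orders
| stats count BY status
| sort - count

Row and column formatting is likewise done in SPL, with commands such as table, rename, and fieldformat.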
I am trying to write a search that compares the latest event with its previous event and shows the difference, if any, for each host. I tried using the earliest and latest functions, but earliest doesn't take the immediately preceding event. Following is the search I tried, but I don't think it's right:

index=abc host=xyz
| stats latest(id) as id latest(SN) as SN latest(PN) as PN latest(_time) as time by host
| stats earliest(id) as eid earliest(SN) as eSN earliest(PN) as ePN earliest(_time) as etime by host

Thanks in advance, Splunkers
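A sketch using streamstats, which can carry each host's immediately preceding event alongside its latest one:

index=abc host=xyz
| sort 0 host, _time
| streamstats current=f window=1 last(id) AS prev_id, last(SN) AS prev_SN, last(PN) AS prev_PN BY host
| stats latest(id) AS id, latest(prev_id) AS prev_id, latest(SN) AS SN, latest(prev_SN) AS prev_SN, latest(PN) AS PN, latest(prev_PN) AS prev_PN BY host
| eval changed=if(id!=prev_id OR SN!=prev_SN OR PN!=prev_PN, "yes", "no")

With current=f and window=1, streamstats hands each event the values of the one before it (per host), so the final row per host carries both the newest event and its predecessor for comparison.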
Hello, I need some help. I have an index from which I pull all of the HR info for our employees, and I have a CSV I bring in using the lookup command; the CSV has all of the machine info for each user. The CSV file has no multivalue fields, so if a user has multiple computers there is a separate line in the CSV for each one. What is happening is that when I run this search, Splunk creates multivalue fields for all the machine info when a person has multiple computers. I have tried mvexpand, but I would have to run it on 14 different fields, and then I get a memory error; I cannot increase the memory, as we have restrictions on that. Even a single mvexpand gives the same memory error. The report will produce over 140000 entries. Below is my basic search; it produces the multivalue fields, which I do not want. I need the report to create a completely separate line when a user has multiple machines. Is there any way to do that without using mvexpand?

(index=wss_desktop_os sourcetype="associate" LOGON_ID="*") LOCATION IN ("CA1*", "CA2*", "CA3*", "CA4*", "CA5*", "CA6*")
| stats values(SPVR_FLL_NM) AS Supervisor, values(EMP_STA_TX) AS "Employee Status", values(SPVR_EMAIL_AD) AS "Supervisor Email", values(L2_LN1_AD) AS Address, values(L2_CTY_NM) AS City, values(SITE_COUNTRY) AS Country, values(DEPARTMENT) AS Department, values(DIV_LG_NM) AS Division, values(L2_FLR_NO) AS Floor, values(FLL_NM) AS FullName, values(LOCATION) AS Location, values(L2_CNY_CD) AS Region, values(L2_CNY_NM) AS SiteCountry, values(LOB) AS ORG, values(L2_STPV_NM) AS State, values(WRK_EMAIL_AD) AS Email by LOGON_ID
| lookup local=true PrimaryUser.csv PrimaryUser AS LOGON_ID OUTPUT host AS host BuildNumber Cores DeviceType InstallDate LastBootUpTime LastReported Locale Manufacturer Model OSVer PCSystemType SerialNumber TotalPhysicalMemoryKB TotalVirtualMemoryKB
| where isnotnull(host)
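One sketch that avoids a wide mvexpand: zip the parallel machine fields into one multivalue field, trim everything you don't need, expand that single small field, and split it back out (shown for three of the fields; extend the mvzip chain for the rest):

| eval machine=mvzip(mvzip(host, Model, "|"), SerialNumber, "|")
| fields - host, Model, SerialNumber
| mvexpand machine
| eval parts=split(machine, "|"), host=mvindex(parts, 0), Model=mvindex(parts, 1), SerialNumber=mvindex(parts, 2)
| fields - machine, parts

mvexpand's memory cost scales with event width times the number of values, so expanding one compact zipped field after dropping the other multivalue columns is often enough to stay under the limit.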
New to Splunk: I want to get syslog into Splunk. Should I install a third-party app to get syslog, or is there another way to get syslog from Windows? I am using Windows 10.
I want to check whether ASA version 9.14 is supported by the "Splunk Add-on for Cisco ASA" 4.2 or 5.1. The documentation shows Cisco ASA v9.12, v9.13, v9.16, and v9.17. Is there a reason v9.14 is not shown or supported?
Hello, I need to delete some old logs on my Cloud instance because I have run out of space. Is there any way to remove old logs without using the delete command in a search? I need to remove these logs through the web interface only. Thanks
I have successfully implemented this in my personal account, and it works fine with the local HEC -> Splunk Cloud. But in my organization's environment, the local HEC gives a successful response code while the indexer fails to index the data, because HEC couldn't send the data on to the Splunk Cloud indexers. I noticed that I am not able to connect to the indexers' domain name or IP address on port 9997. Do I need to explicitly allow a rule from Splunk Cloud for the indexers?
Hi Team, is there a way to edit KV store data? I can see that some of the columns are hidden, due to which I am not able to extract the data in the query. Can anyone help me troubleshoot this issue? Thanks in advance.
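If the collection is exposed through a lookup definition, rows can be edited from SPL (a sketch; my_kvstore and the status field are placeholders). Keeping _key is what turns the write into an update of the existing records rather than an insert of new ones:

| inputlookup my_kvstore
| eval _key=_key, status="closed"
| outputlookup my_kvstore append=true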
Hi, we have onboarded Ping Federate logs in Splunk, but multiple log events are getting clubbed together into one Splunk event. Can someone help me understand why this is happening and how I can rectify it? Below is a log sample.

{"owner": "689186784177", "logGroup": "/aws/containerinsights/prod/application", "logStream": "pingaccess-was-admin-0_ping-cloud_pingaccess-was-admin", "logEvents": [{"id": "37013848036576111097453574573076270650086865051558412288", "timestamp": 1659758349197, "message": {"log": "<134>Aug 6 03:59:04 pingaccess-was-admin-0 , {\"thread\":\"ReqProc-StageThread[2]/LAUQHf9OalMaVcCzi173zw\",\"level\":\"INFO\",\"loggerName\":\"apiaudit\",\"message\":\"PA Audit with data \",\"endOfBatch\":false,\"loggerFqcn\":\"org.apache.logging.slf4j.Log4jLogger\",\"instant\":{\"epochSecond\":1659758344,\"nanoOfSecond\":121700000},\"threadId\":64,\"threadPriority\":5,\"date\":\"2022-08-06T03:59:04+0000\",\"exchangeId\":\"LAUQHf9OalMaVcCzi173zw\",\"roundTripMS\":\"1\",\"subject\":\"Administrator\",\"authMech\":\"Basic\",\"client\":\"127.0.0.1\",\"method\":\"GET\",\"resource\":\"* [] /pa-admin-api/v3 /version:-1\",\"requestUri\":\"/pa-admin-api/v3/version\",\"responseCode\":\"200\",\"logPurpose\":\"Audit_SIEM\"}\r\n", "stream": "stdout", "docker": {"container_id": "074987ed295dde1388b9e983bfe7bef9e7140a420a16070a42b57cc5a68ba0be"}, "kubernetes": {"container_name": "pingaccess-was-admin", "namespace_name": "ping-cloud", "pod_name": "pingaccess-was-admin-0", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingaccess-was:6.3.1-v1.0.19", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingaccess-was@sha256:f5d3fea0441c96815e1c89d108d9e57c3a8db0e3809a7c597a5000754a9b22ec", "pod_id": "1b0c2182-cdf3-44aa-aecb-26d9ed968169", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingaccess-was-cluster", "controller-revision-hash": "pingaccess-was-admin-66c966594f", "role": "pingaccess-was-admin", "statefulset_kubernetes_io/pod-name": "pingaccess-was-admin-0"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}}}, {"id": "37013848148102137835305220903896397731601320942966472705", "timestamp": 1659758354198, "message": {"log": "<134>Aug 6 03:59:09 pingaccess-was-admin-0 , {\"thread\":\"ReqProc-StageThread[4]/vbAQ-x_FAX3vzYPGAFJILA\",\"level\":\"INFO\",\"loggerName\":\"apiaudit\",\"message\":\"PA Audit with data \",\"endOfBatch\":false,\"loggerFqcn\":\"org.apache.logging.slf4j.Log4jLogger\",\"instant\":{\"epochSecond\":1659758349,\"nanoOfSecond\":145851000},\"threadId\":69,\"threadPriority\":5,\"date\":\"2022-08-06T03:59:09+0000\",\"exchangeId\":\"vbAQ-x_FAX3vzYPGAFJILA\",\"roundTripMS\":\"1\",\"subject\":\"Administrator\",\"authMech\":\"Basic\",\"client\":\"127.0.0.1\",\"method\":\"GET\",\"resource\":\"* [] /pa-admin-api/v3 /version:-1\",\"requestUri\":\"/pa-admin-api/v3/version\",\"responseCode\":\"200\",\"logPurpose\":\"Audit_SIEM\"}\r\n", "stream": "stdout", "docker": {"container_id": "074987ed295dde1388b9e983bfe7bef9e7140a420a16070a42b57cc5a68ba0be"}, "kubernetes": {"container_name": "pingaccess-was-admin", "namespace_name": "ping-cloud", "pod_name": "pingaccess-was-admin-0", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingaccess-was:6.3.1-v1.0.19", "container_image_id":
"docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingaccess-was@sha256:f5d3fea0441c96815e1c89d108d9e57c3a8db0e3809a7c597a5000754a9b22ec", "pod_id": "1b0c2182-cdf3-44aa-aecb-26d9ed968169", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingaccess-was-cluster", "controller-revision-hash": "pingaccess-was-admin-66c966594f", "role": "pingaccess-was-admin", "statefulset_kubernetes_io/pod-name": "pingaccess-was-admin-0"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}}}], "envType": "prod"}   {"owner": "689186784177", "logGroup": "/aws/containerinsights/prod/application", "logStream": "server_logs.pingfederate-1_ping-cloud_pingfederate", "logEvents": [{"id": "37013848024578310180644099321922695591001628886545793024", "timestamp": 1659758348659, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:02,867 tid:qf8Wb6tGdzy8PN4tl93rHepNMR0 DEBUG [org.sourceid.websso.servlet.IntegrationControllerServlet] GET: https://localhost:9031/pf/heartbeat.ping\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848024578310180644099321922695591001628886545793025", "timestamp": 1659758348659, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:02,868 tid:qf8Wb6tGdzy8PN4tl93rHepNMR0 DEBUG [org.sourceid.servlet.HttpServletRespProxy] flush cookies: adding Cookie{PF=hashedValue:qf8Wb6tGdzy8PN4tl93rHepNMR0; path=/; maxAge=-1; domain=null}\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": 
"https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848024578310180644099321922695591001628886545793026", "timestamp": 1659758348659, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:07,871 DEBUG [org.sourceid.util.log.internal.TrackingIdSupport] The incoming request does not contain a unique identifier. Assigning auto-generated request ID: OdFVgBFxp1JYNoiCQenljYo0I\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848085437043827434169875173670757059007436366348291", "timestamp": 1659758351388, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:07,871 DEBUG [org.sourceid.servlet.HttpServletRespProxy] adding lazy cookie Cookie{PF=hashedValue:VizAUYH0x9Lu7GbN_Rqv9fevZ7c; path=/; maxAge=-1; domain=null} replacing null\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848091480545776235968746529850408946713404487041028", "timestamp": 1659758351659, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:07,871 tid:VizAUYH0x9Lu7GbN_Rqv9fevZ7c DEBUG [org.sourceid.websso.servlet.IntegrationControllerServlet] GET: https://localhost:9031/pf/heartbeat.ping\n", "stream": 
"stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848091480545776235968746529850408946713404487041029", "timestamp": 1659758351659, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:07,871 tid:VizAUYH0x9Lu7GbN_Rqv9fevZ7c DEBUG [org.sourceid.servlet.HttpServletRespProxy] flush cookies: adding Cookie{PF=hashedValue:VizAUYH0x9Lu7GbN_Rqv9fevZ7c; path=/; maxAge=-1; domain=null}\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848091480545776235968746529850408946713404487041030", "timestamp": 1659758351659, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:11,388 DEBUG [org.sourceid.util.log.internal.TrackingIdSupport] The incoming request does not contain a unique identifier. 
Assigning auto-generated request ID: ssP8SFNvaJjisaEtFgH9aIN3q\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}, {"id": "37013848118464447466458022747788069518851230826723344391", "timestamp": 1659758352869, "message": {"log": "/opt/out/instance/log/server.log 2022-08-06 03:59:11,388 DEBUG [org.sourceid.servlet.HttpServletRespProxy] adding lazy cookie Cookie{PF=hashedValue:hFl_yLtHqOydi68KkvReugURSyc; path=/; maxAge=-1; domain=null} replacing null\n", "stream": "stdout", "docker": {"container_id": "1f9796fb6abf364e5be93bf7bda4242ad75d987b80141f19ff62a2ddbe7ac2ce"}, "kubernetes": {"container_name": "pingfederate", "namespace_name": "ping-cloud", "pod_name": "pingfederate-1", "container_image": "public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate:10.3.5-v1.0.23-no-IKs", "container_image_id": "docker-pullable://public.ecr.aws/r2h3l6e4/pingcloud-apps/pingfederate@sha256:b781191a0a206d4779e4959c7f0cc14ec9a8022692a0481bbf38438dad49a7be", "pod_id": "278d7ef7-a6c1-47d0-a65f-7f5ad711fcd8", "host": "ip-10-10-117-227.us-east-2.compute.internal", "labels": {"app": "ping-cloud", "cluster": "pingfederate-cluster", "controller-revision-hash": "pingfederate-75b6bfc8fd", "role": "pingfederate-engine", "statefulset_kubernetes_io/pod-name": "pingfederate-1"}, "master_url": "https://172.20.0.1:443/api", "namespace_id": "38da143e-d00d-489a-9284-560673491f1d", "namespace_labels": {"app": "ping-cloud", "app_kubernetes_io/instance": "ping-cloud-master-us-east-2", "kubernetes_io/metadata_name": "ping-cloud"}}, "stream_name": "pingfederate-1_ping-cloud_pingfederate"}}], "envType": "prod"}  
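What the sample shows is a whole CloudWatch Logs batch (one logEvents array) being indexed as a single event. The clean fix is to split the batch before it reaches Splunk (e.g. in the forwarding Lambda/Firehose processor); failing that, a hedged props.conf sketch that line-breaks the batch at each logEvents entry (the sourcetype name is a placeholder):

[pingfederate:cloudwatch]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\}\},\s*)\{"id":\s*"\d+
TRUNCATE = 0
TIME_PREFIX = "timestamp":\s*
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20

LINE_BREAKER discards only the first capture group, so each resulting event starts at its {"id": ... entry; note this leaves the batch wrapper on the first event and produces JSON fragments, which is why splitting upstream is preferable.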
Hi there, I am trying to build a Splunk alert with Slack and pass a table column of values as an array, e.g.

Result Table:
Field1  Field2
A1      B1
A2      B2

Expected alert message:
Field1 : ["A1", "A2"]

I am currently referencing the following documentation, using the result token $result.Field1$. However, it shows only the value from the first row, i.e. Field1 : A1. Is it possible to have the alert message contain an array of values instead? Thanks in advance!
https://docs.splunk.com/Documentation/Splunk/8.2.1/Alert/EmailNotificationTokens
https://github.com/splunk/slack-alerts/issues/30
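One sketch: collapse the column into a single row before the alert runs, so that $result.Field1$ (which only ever reads the first row) already holds the rendered array:

<your base search>
| stats values(Field1) AS Field1
| eval Field1="[\"" . mvjoin(Field1, "\", \"") . "\"]"

With the table above, this yields Field1 : ["A1", "A2"] in the Slack message.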
*Note: Just sharing this issue and its resolution.

Issue: Splunk_TA_snow upgrade issue

Description: There was a requirement to upgrade the ServiceNow TA from 7.1.1 to 7.4.0. After the upgrade, we noticed that the "Configuration" section was not working (see screenshot below). Error found in /opt/splunk/var/log/splunk/splunkd.log:

07-29-2022 05:00:47.312 +0000 ERROR AdminManagerExternal - Stack trace from python handler:\nTraceback (most recent call last):\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/handler.py", line 124, in wrapper\n    for name, data, acl in meth(self, *args, **kwargs):\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/handler.py", line 345, in _format_all_response\n    self._encrypt_raw_credentials(cont["entry"])\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/handler.py", line 375, in _encrypt_raw_credentials\n    change_list = rest_credentials.decrypt_all(data)\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/credentials.py", line 293, in decrypt_all\n    all_passwords = credential_manager._get_all_passwords()\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/utils.py", line 153, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/credentials.py", line 283, in _get_all_passwords\n    clear_password += field_clear[index]\nTypeError: can only concatenate str (not "NoneType") to str\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 113, in init_persistent\n    hand.execute(info)\n  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 636, in execute\n    if self.requestedAction == ACTION_LIST:     self.handleList(confInfo)\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/admin_external.py", line 63, in wrapper\n    for entity in result:\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/handler.py", line 131, in wrapper\n    raise RestError(500, traceback.format_exc())\nsplunktaucclib.rest_handler.error.RestError: REST Error [500]: Internal Server Error -- Traceback (most recent call last):\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/handler.py", line 124, in wrapper\n    for name, data, acl in meth(self, *args, **kwargs):\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/handler.py", line 345, in _format_all_response\n    self._encrypt_raw_credentials(cont["entry"])\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/handler.py", line 375, in _encrypt_raw_credentials\n    change_list = rest_credentials.decrypt_all(data)\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunktaucclib/rest_handler/credentials.py", line 293, in decrypt_all\n    all_passwords = credential_manager._get_all_passwords()\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/utils.py", line 153, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/credentials.py", line 283, in _get_all_passwords\n    clear_password += field_clear[index]\nTypeError: can only concatenate str (not "NoneType") to str\n\n

Solution: We performed a fresh install of Splunk_TA_snow, because upgrading the app in place gave the same error. We also found conflicts with the following apps: splunk_app_addon-builder, Splunk_TA_microsoft-cloudservices, and TA-defender-atp-hunting. After removing these apps, Splunk_TA_snow is now working as expected.