All Topics

I have the following data in the logfile:

{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC5"}
{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC6","account":"verified"}
{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC7","account":"unverified"}
{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC8","account":"verified"}
{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC9"}

I need a report like the following, so that I get the count of "verified" where it is explicitly mentioned; otherwise the event should be counted under "unverified":

Type       Count
Verified   2
Unverified 3

How can we achieve this? I will appreciate your inputs!
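One possible approach, sketched in SPL (the index and sourcetype names below are placeholders, and this assumes the account value can be pulled out of the raw JSON fragment with rex):

```
index=your_index sourcetype=your_sourcetype
| rex field=_raw "\"account\":\"(?<account>[^\"]+)\""
| eval Type=if(account="verified", "Verified", "Unverified")
| stats count by Type
```

Because if() evaluates a null comparison as false, events with no account field at all fall into the "Unverified" branch, which matches the requested report.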
I'm trying to break out the comma-separated values in my results, but I'm drawing a blank. I want to break out the specific reasons, e.g. {New Geo-Location=NEGATIVE, New Device=POSITIVE, New IP=NEGATIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}

index="okta" actor.alternateId="*mydomain*" outcome.reason=*CHALLENGE* client.geographicalContext.country!="" actor.displayName!="Okta System" AND NOT "okta_svc_acct" | bin _time span=45d | stats count by outcome.reason, debugContext.debugData.behaviors | sort -count outcome.reason debugContext.debugData.behaviors

Sample results:
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=POSITIVE, New IP=NEGATIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=NEGATIVE, New IP=NEGATIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=POSITIVE, New IP=POSITIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=POSITIVE, New Device=POSITIVE, New IP=POSITIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=POSITIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=NEGATIVE, New IP=POSITIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
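One hedged sketch: pull every Name=RESULT pair out of the behaviors field with a multivalue rex, expand, then split each pair. Field names are taken from the posted search; adjust if your extraction differs:

```
index="okta" actor.alternateId="*mydomain*" outcome.reason=*CHALLENGE*
| rex field=debugContext.debugData.behaviors max_match=0 "(?<pair>[^{,]+=(?:POSITIVE|NEGATIVE))"
| mvexpand pair
| rex field=pair "\s*(?<behavior>.+)=(?<result>POSITIVE|NEGATIVE)"
| stats count by behavior, result
```

This turns each "{...}" blob into one row per reason, so you can count POSITIVE vs NEGATIVE per behavior.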
Hi to all, I'm a newbie to Splunk and I need to check whether Splunk Cloud is receiving traffic from our network infrastructure. I thought of doing this via an API request, but I can't find the URL to send the request to. Could anybody point me to documentation on how to do this, or suggest another way? Thanks in advance! David.
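As a sketch, the simplest check does not need the REST API at all: run a search over recent data and see which hosts are reporting, for example:

```
| tstats count where index=* earliest=-15m by index, host
```

If you do want the REST API, Splunk Cloud exposes the search endpoint on the management port; the hostname below is only the general pattern, not your actual stack, and REST access on Splunk Cloud may need to be enabled/allowlisted by Splunk support first:

```
https://<your-stack>.splunkcloud.com:8089/services/search/jobs
```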
Hello, I'm trying to run a chart command grouped by two fields, but I'm getting an error. This is my query:

| chart values(SuccessRatioBE) as SuccessRatioBE over _time by UserAgent LoginType

and I'm getting this error: "Error in 'chart' command: The argument 'LoginType' is invalid." I also tried separating the fields with a comma, and with quotes/backticks.
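When an explicit over field is used, chart accepts only one split-by field, which is why 'LoginType' is rejected. A common workaround, sketched here, is to combine the two fields into a single series field first:

```
| eval series=UserAgent . ":" . LoginType
| chart values(SuccessRatioBE) as SuccessRatioBE over _time by series
```

Each UserAgent/LoginType combination then becomes its own chart series.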
Tech Talk Starting With Observability: OpenTelemetry Best Practices

In this Tech Talk, learn what OpenTelemetry is, why it's the future of Observability, and how to use the Splunk Distribution of the OpenTelemetry Collector in a demo. We'll also discuss design considerations for an OpenTelemetry Collector deployment, the components of the OpenTelemetry pipeline, and how they can impact performance.

Read the Tech Talk blog, Observability Unveiled: Navigating OpenTelemetry's Framework and Deployment Options
Hello,

I hope everything is okay. I need your help. I am using this SPL request:

index="bloc1rg" AND libelle IN (IN_PREC, OUT_PREC, IN_BT, OUT_BT, IN_RANG, OUT_RANG) earliest=-1mon@mon latest=-1d@d | append [search index="bloc1rg" AND libelle IN (IN_PREC, OUT_PREC, IN_BT, OUT_BT, IN_RANG, OUT_RANG) earliest=-1mon@mon latest=-1d@d | chart count over id_flux by libelle | eval IN_BT_OUT_BT=IN_BT+OUT_BT | eval IN_PREC_OUT_PREC=IN_PREC+OUT_PREC | eval IN_RANG_OUT_RANG=IN_RANG+OUT_RANG | where IN_BT_OUT_BT>=2 | where IN_PREC_OUT_PREC >=2 | where IN_RANG_OUT_RANG >=2 | transpose | search column=id_flux | transpose | fields - "column" | rename "row 1" as id_flux] | stats last(_time) as last_time by id_flux libelle

I get these results, but I can't get what I want. Let me explain. For a given id_flux, I'd like to have the response time defined as follows:
- out_rang time - in_rang time
- out_prec time - in_prec time
- out_bt time - in_bt time

Here is the full query I used:

search index="bloc1rg" AND libelle IN (IN_PREC, OUT_PREC, IN_BT, OUT_BT, IN_RANG, OUT_RANG) earliest=-1mon@mon latest=-1d@d |chart count over id_flux by libelle |eval IN_BT_OUT_BT=IN_BT+OUT_BT |eval IN_PREC_OUT_PREC=IN_PREC+OUT_PREC |eval IN_PREC_OUT_PREC=IN_PREC+OUT_PREC | eval IN_RANG_OUT_RANG=IN_RANG+OUT_RANG |where IN_BT_OUT_BT>=2 |where IN_PREC_OUT_PREC >=2 |where IN_RANG_OUT_RANG >=2 |transpose |search column=id_flux |transpose |fields - "column" |rename "row 1" as id_flux] | eval sortorder=case(libelle=="IN_PREC",1,libelle=="OUT_PREC" AND statut=="KO",2,libelle=="OUT_PREC" AND statut=="OK",3,libelle=="IN_BT",4,libelle=="OUT_BT",5, libelle=="IN_RANG",6, libelle=="OUT_RANG" AND statut=="KO",7, libelle=="OUT_RANG" AND statut=="OK",8) | sort 0 sortorder |eval libelle=if(sortorder=2,"ARE", (if (sortorder=3,"AEE", (if(sortorder=7, "BAN",(if(sortorder=8, "CCO", libelle))))))) |table libelle sortorder _time |chart avg(_time) over sortorder by libelle | filldown AEE, ARE, 
IN_BT, IN_PREC, OUT_BT, IN_RANG, OUT_RANG |eval OK=abs(OUT_BT-IN_BT)/1000 |eval AEE=abs(AEE-IN_PREC)/1000 |eval ARE=abs(ARE-IN_PREC)/1000 |eval CCO=abs(CCO-IN_RANG) |eval BAN=abs(BAN-IN_RANG) |fields - sortorder |stats values(*) as * |table AEE ARE BAN CCO OK |transpose |rename "row 1" as "temps de traitement (s)" |rename column as "statut"  
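As a rough sketch of an alternative, the per-id_flux time deltas can be computed directly with stats over the raw events, one timestamp per libelle, without the transpose steps. Field values are taken from the question; which aggregator (earliest vs latest) fits each milestone may need adjusting:

```
index="bloc1rg" libelle IN (IN_PREC, OUT_PREC, IN_BT, OUT_BT, IN_RANG, OUT_RANG) earliest=-1mon@mon latest=-1d@d
| stats earliest(eval(if(libelle="IN_PREC",_time,null())))  as t_in_prec
        latest(eval(if(libelle="OUT_PREC",_time,null())))   as t_out_prec
        earliest(eval(if(libelle="IN_BT",_time,null())))    as t_in_bt
        latest(eval(if(libelle="OUT_BT",_time,null())))     as t_out_bt
        earliest(eval(if(libelle="IN_RANG",_time,null())))  as t_in_rang
        latest(eval(if(libelle="OUT_RANG",_time,null())))   as t_out_rang
        by id_flux
| eval prec_s=t_out_prec-t_in_prec, bt_s=t_out_bt-t_in_bt, rang_s=t_out_rang-t_in_rang
```

Each eval in the final line is one of the three requested response times, in seconds, per id_flux.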
I would like to understand better how transformations work, in terms of priority and data flow. Let's say I have 3 transformations in place that I want to apply to a sourcetype (e.g. "test"): [tr1] [tr2] [tr3] Can somebody explain to me what the difference is (if any) between these two props.conf configurations: ----1---- [test] TRANSFORMS-1=tr1 TRANSFORMS-2=tr2 TRANSFORMS-3=tr3 ---2---- [test] TRANSFORMS-1=tr1,tr2,tr3   Is the resulting data transformation the same in both cases? Thank you!
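For reference, a sketch of the ordering rules as I understand them from the props.conf spec: transforms in a comma-separated list within one TRANSFORMS-<class> run left to right, and separate TRANSFORMS-<class> settings run in ASCII (lexicographic) order of the class names. With class names -1, -2, -3 the two configurations should therefore apply the transforms in the same order:

```
# props.conf -- both forms apply tr1, then tr2, then tr3 to sourcetype "test"
[test]
# Form 1: three classes, ordered by the ASCII sort of the class names
TRANSFORMS-1 = tr1
TRANSFORMS-2 = tr2
TRANSFORMS-3 = tr3

# Form 2 (equivalent outcome here): one class, list applied left to right
# TRANSFORMS-1 = tr1,tr2,tr3
```

The difference would only show with class names whose ASCII order does not match the intended order.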
Hi Community,

We have this weird situation where one of our newest Splunk installs (3 months old) ran out of space; the capacity of the server was 500 GB. When I checked the size of each index in the GUI, the sizes were all under the limit. The sum of all of them was under 250 GB, which made sense, as the size of each index is set to 500 GB (the default). But when I calculated the size of the data models associated with the indexes, I could see that the data models had used almost 250 GB. My understanding was that the data models should also be counted under the index capacity, but they seemed to be exceeding the limits.

Can anyone please shed some light on this topic?

Regards, Pravin
Dear team, I am using the Windows machine agent to capture default metrics, but I am getting the following error on the client side with debug mode enabled. com.google.inject.ProvisionException: Unable to provision, see the following errors:   1) [Guice/ErrorInCustomProvider]: IllegalArgumentException: Cipher cannot be initialized   while locating LocalSimCollectorScriptPathProvider   at PathRawCollectorModule.configure(PathRawCollectorModule.java:25)       \_ installed by: ServersExtensionModule -> PathRawCollectorModule   at SimCollectorProcessBuilderFromPathProvider.<init>(SimCollectorProcessBuilderFromPathProvider.java:42)       \_ for 1st parameter   while locating SimCollectorProcessBuilderFromPathProvider   while locating Optional<RawCollectorUtil$ICollectorProcessBuilder>   Learn more:   https://github.com/google/guice/wiki/ERROR_IN_CUSTOM_PROVIDER   1 error   ====================== Full classname legend: ====================== LocalSimCollectorScriptPathProvider:        "com.appdynamics.sim.agent.extensions.servers.collector.LocalSimCollectorScriptPathProvider" Optional:                                   "com.google.common.base.Optional" PathRawCollectorModule:                     "com.appdynamics.sim.agent.extensions.servers.collector.PathRawCollectorModule" RawCollectorUtil$ICollectorProcessBuilder:  "com.appdynamics.sim.agent.extensions.servers.collector.RawCollectorUtil$ICollectorProcessBuilder" ServersExtensionModule:                     "com.appdynamics.sim.agent.extensions.servers.ServersExtensionModule" SimCollectorProcessBuilderFromPathProvider: "com.appdynamics.sim.agent.extensions.servers.collector.SimCollectorProcessBuilderFromPathProvider" ======================== End of classname legend: ========================   at com.google.inject.internal.InternalProvisionException.toProvisionException(InternalProvisionException.java:251) ~[guice-5.1.0.jar:?] at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1104) ~[guice-5.1.0.jar:?] 
at com.appdynamics.sim.agent.extensions.servers.model.windows.WindowsRawCollector.collectRawData(WindowsRawCollector.java:84) ~[servers-23.7.0.3689.jar:?] at com.appdynamics.sim.agent.extensions.servers.model.windows.WindowsRawCollector.collectRawData(WindowsRawCollector.java:49) ~[servers-23.7.0.3689.jar:?] at com.appdynamics.sim.agent.extensions.servers.model.Server.collectAndReport(Server.java:65) ~[servers-23.7.0.3689.jar:?] at com.appdynamics.sim.agent.extensions.servers.ServersDataCollector.run(ServersDataCollector.java:94) [servers-23.7.0.3689.jar:?] at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [?:1.8.0_101] at java.util.concurrent.FutureTask.runAndReset(Unknown Source) [?:1.8.0_101] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source) [?:1.8.0_101] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [?:1.8.0_101] at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_101] at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_101] at java.lang.Thread.run(Unknown Source) [?:1.8.0_101] Caused by: java.lang.IllegalArgumentException: Cipher cannot be initialized at com.appdynamics.agent.sim.encryption.MessageDigestDecryptionService.decryptInputStream(MessageDigestDecryptionService.java:92) ~[machineagent.jar:Machine Agent v23.7.0.3689 GA compatible with 4.4.1.0 Build Date 2023-07-20 08:59:11] at com.appdynamics.sim.agent.extensions.servers.collector.LocalSimCollectorScriptPathProvider.get(LocalSimCollectorScriptPathProvider.java:77) ~[?:?] at com.appdynamics.sim.agent.extensions.servers.collector.LocalSimCollectorScriptPathProvider.get(LocalSimCollectorScriptPathProvider.java:34) ~[?:?] at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:86) ~[guice-5.1.0.jar:?] 
at com.google.inject.internal.BoundProviderFactory.provision(BoundProviderFactory.java:72) ~[guice-5.1.0.jar:?] at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:60) ~[guice-5.1.0.jar:?] at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:59) ~[guice-5.1.0.jar:?] at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40) ~[guice-5.1.0.jar:?] at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169) ~[guice-5.1.0.jar:?] at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45) ~[guice-5.1.0.jar:?] at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40) ~[guice-5.1.0.jar:?] at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60) ~[guice-5.1.0.jar:?] at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113) ~[guice-5.1.0.jar:?] at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91) ~[guice-5.1.0.jar:?] at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300) ~[guice-5.1.0.jar:?] at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:58) ~[guice-5.1.0.jar:?] at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1101) ~[guice-5.1.0.jar:?] ... 
11 more Caused by: java.security.InvalidKeyException: Illegal key size at javax.crypto.Cipher.checkCryptoPerm(Cipher.java:1039) ~[?:1.8.0_71] at javax.crypto.Cipher.init(Cipher.java:1393) ~[?:1.8.0_71] at javax.crypto.Cipher.init(Cipher.java:1327) ~[?:1.8.0_71] at com.appdynamics.agent.sim.encryption.MessageDigestDecryptionService.decryptInputStream(MessageDigestDecryptionService.java:90) ~[machineagent.jar:Machine Agent v23.7.0.3689 GA compatible with 4.4.1.0 Build Date 2023-07-20 08:59:11] at com.appdynamics.sim.agent.extensions.servers.collector.LocalSimCollectorScriptPathProvider.get(LocalSimCollectorScriptPathProvider.java:77) ~[?:?] at com.appdynamics.sim.agent.extensions.servers.collector.LocalSimCollectorScriptPathProvider.get(LocalSimCollectorScriptPathProvider.java:34) ~[?:?] at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:86) ~[guice-5.1.0.jar:?] at com.google.inject.internal.BoundProviderFactory.provision(BoundProviderFactory.java:72) ~[guice-5.1.0.jar:?] at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:60) ~[guice-5.1.0.jar:?] at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:59) ~[guice-5.1.0.jar:?] at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40) ~[guice-5.1.0.jar:?] at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169) ~[guice-5.1.0.jar:?] at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45) ~[guice-5.1.0.jar:?] at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40) ~[guice-5.1.0.jar:?] at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60) ~[guice-5.1.0.jar:?] at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113) ~[guice-5.1.0.jar:?] 
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91) ~[guice-5.1.0.jar:?] at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300) ~[guice-5.1.0.jar:?] at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:58) ~[guice-5.1.0.jar:?] at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1101) ~[guice-5.1.0.jar:?] ... 11 more   Any insights would be a great help   Regards Sathish
Hello everyone! Does anybody know whether it is possible to aggregate (bind) auditd events (I mean logs from audit/audit.log) into one event by Record ID (Event ID)? I want to do this at the parsing stage, so that the events arrive in my index already aggregated into one.
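As far as I know, this kind of aggregation is usually done at search time rather than at parse time. A hedged search-time sketch: auditd records belonging to one logical event share the serial number in msg=audit(epoch:serial), so you can extract it and group with transaction (the sourcetype name below is a placeholder):

```
sourcetype=linux:audit
| rex field=_raw "msg=audit\((?<audit_epoch>\d+\.\d+):(?<audit_serial>\d+)\)"
| transaction audit_serial maxspan=5s
```

If you truly need the events merged before indexing, that normally means pre-processing the log outside Splunk (e.g. with ausearch or a forwarder-side script) rather than with index-time transforms.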
Hello, we are investigating whether we can install, with Helm, the Splunk OpenTelemetry Collector for Kubernetes to collect our logs and ingest them into Splunk Cloud. We would like to split the system logs from the other logs into two different indexes. Reading the documentation, I saw that it is possible to specify the index as an annotation on namespaces or pods, but in the Helm chart's values.yaml the index field is required and appears to accept only one index. In summary, we want to use two different indexes, setting one as the default and selecting the other via namespace annotations. Could you kindly show me a configuration for our problem?
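A sketch of the two pieces, assuming the splunk-otel-collector chart's splunkPlatform section and its splunk.com/index annotation; key names, endpoint, and index names below are illustrative and should be checked against the chart version you deploy:

```yaml
# values.yaml -- the required index acts only as the default for unannotated workloads
splunkPlatform:
  endpoint: "https://http-inputs-<your-stack>.splunkcloud.com:443/services/collector"
  token: "<hec-token>"
  index: "k8s_apps"        # default index
```

```yaml
# A namespace whose pod logs should go to the system index instead
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  annotations:
    splunk.com/index: "k8s_system"   # overrides the chart-level default
```

With this layout, everything lands in k8s_apps unless a namespace (or pod) carries the annotation.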
Is it possible to have the true and false parts of an if statement contain eval expressions?

| eval pwdExpire=if(type="staff", | eval relative_time(_time, "+90day") , | eval relative_time(_time, "+180day") )

The desired result is: if type="staff", calculate pwdExpire as _time + 90 days; else calculate pwdExpire as _time + 180 days. I will then format pwdExpire and display it in a table.
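The branches of if() take plain expressions, not nested | eval commands, so the function calls go straight into the arguments. A sketch of the corrected syntax, keeping the field and time modifiers from the question (the fields in the final table line are placeholders):

```
| eval pwdExpire=if(type="staff", relative_time(_time, "+90day"), relative_time(_time, "+180day"))
| eval pwdExpire=strftime(pwdExpire, "%Y-%m-%d")
| table user, type, pwdExpire
```

relative_time() returns an epoch value, so the strftime() on the second line is what turns it into a readable date for the table.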
Hi,  I want to restrict access to different teams based on hosts but don't want to do it by creating multiple indexes for this. The data would be present in one index and teams would be given access to this index, however they should be able to see only the data they own. Is there a way host-based restriction can be achieved? 
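One possible approach (a sketch; the role names, index name, and host patterns are hypothetical): keep the single index and put a search filter on each team's role in authorize.conf. The srchFilter setting is silently ANDed onto every search the role runs:

```
# authorize.conf
[role_team_a]
srchIndexesAllowed = shared_index
srchFilter = host=appserver-a* OR host=db-a*

[role_team_b]
srchIndexesAllowed = shared_index
srchFilter = host=appserver-b*
```

Note that users holding multiple roles get the union of their filters, so role assignments need to stay disjoint for a strict separation.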
I am using ITSI's KPI-based search for text log monitoring. If the text logs match the search criteria, the flow is to send an alert via email. I would like to quote the contents of the text logs tha... See more...
I am using ITSI's KPI-based search for text log monitoring. If the text logs match the search criteria, the flow is to send an alert via email. I would like to quote the contents of the text logs that matched the detection criteria in the body of the email. Is it possible to implement such requirements with Splunk ITSI? If so, I would like to know the detailed content of the implementation. If not, I would like to know the reason why.
Hi All, is there a way to retrieve a specific alert without using the short ID on the Incident Review page? I was thinking of using the "rule_id" field or the "event_hash" of the alert, but I couldn't pull the specific alert with them. Please suggest any alternate method other than using the short ID. Thanks.
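A hedged sketch: in Enterprise Security the notables are regular events in the notable index, so you can search them directly by the fields you mention (the values below are placeholders):

```
index=notable
| search rule_id="ABC-123" OR event_hash="d41d8cd98f00b204e9800998ecf8427e"
```

If the raw notable turns up here but not in Incident Review, the filtering is likely happening in the Incident Review correlation of status/owner rather than in the lookup of the event itself.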
We are currently using a regex pattern to match events against our raw data, and it works perfectly when we use the search app. The pattern we are using is: C:\\Windows\\system32\\cmd\.exe*C:\\ProgramData\\Symantec\\Symantec Endpoint Protection\\14\.3\.8289\.5000\.105\\Data\\Definitions\\WebExtDefs\\20230830\.063\\webextbridge\.exe* However, when we try to use this pattern in a lookup table, the events are not being matched. This seems to be because of the wildcard in the pattern. Despite defining the field name in the lookup definition (e.g., WILDCARD(process)), it still doesn't match the events. I'm wondering whether Splunk lookups support wildcards within strings, or only at the beginning and end of strings. Any insights or guidance on this matter would be greatly appreciated. Regards VK
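For what it's worth, WILDCARD match_type treats * as the only special character and everything else as a literal, so the lookup value should be the literal path with * where needed, not a regex with \\ and \. escapes. A sketch (the stanza, file, and field values below are hypothetical):

```
# transforms.conf
[process_lookup]
filename = process_lookup.csv
match_type = WILDCARD(process)
max_matches = 1
```

with a CSV row along the lines of:

```
process,verdict
C:\Windows\system32\cmd.exe*C:\ProgramData\Symantec\*\webextbridge.exe*,known_good
```

Mid-string wildcards are supported with WILDCARD; if matches still fail with a literal-plus-* value, check case sensitivity (case_sensitive_match) in the lookup definition.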
Hi, we are logging API requests in Splunk. I would like to create a sort of health-check table where every column represents the status code of the last API call in the previous 5 minutes, while each row is a different API. Here is an example of what the output should be. Any idea how I could achieve that in Splunk? Each row represents a different API (request.url), while the status code is stored in response.status. Thank you
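A sketch using the field names from the post (request.url, response.status; the index name is a placeholder): bucket into 5-minute slots, take the last status per API per slot, then pivot so each column is a time slot:

```
index=api_logs
| bin _time span=5m
| stats latest("response.status") as status by _time, "request.url"
| xyseries "request.url" _time status
```

xyseries produces one row per request.url with one column per 5-minute bucket, which matches the described layout.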
Hi All, below is my query:

index=idx-sec-cloud sourcetype=rubrik:json NOT summary="*on demand backup*" (custom_details.eventName="Snapshot.BackupFailed" NOT (custom_details.errorId="Oracle.RmanStatusDetailsEmpty")) OR (custom_details.eventName="Mssql.LogBackupFailed") OR (custom_details.eventName="Snapshot.BackupFromLocationFailed" NOT (custom_details.errorId="Fileset.FailedDataThresholdNas" OR custom_details.errorId="Fileset.FailedFileThresholdNas" OR custom_details.errorId="Fileset.FailedToFindFilesNas")) OR (custom_details.eventName="Vmware.VcenterRefreshFailed") OR (custom_details.eventName="Hawkeye.IndexOperationOnLocationFailed") OR (custom_details.eventName="Hawkeye.IndexRetryFailed") OR (custom_details.eventName="Storage.SystemStorageThreshold") OR (custom_details.eventName="ClusterOperation.DiskLost") OR (custom_details.eventName="ClusterOperation.DiskUnhealthy") OR (custom_details.eventName="Hardware.DimmError") OR (custom_details.eventName="Hardware.PowerSupplyNeedsReplacement") | eventstats earliest(_time) AS early_time latest(_time) AS late_time

I am trying to write a condition that excludes all custom_details.eventName="Mssql.LogBackupFailed" events (the Mssql.LogBackupFailed clause above) unless their count is greater than 2. Not sure how to proceed.

Thanks in advance.
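One hedged way: count the Mssql.LogBackupFailed events with eventstats, then keep them only when the count exceeds 2. Note the single quotes needed around dotted field names inside eval/where expressions:

```
... your base search ...
| eventstats count(eval('custom_details.eventName'="Mssql.LogBackupFailed")) as mssql_count
| where 'custom_details.eventName'!="Mssql.LogBackupFailed" OR mssql_count>2
```

All other event names pass through the where clause untouched; the Mssql events survive only when there are more than two of them in the search window.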
Hello, I would like to pass one of the columns/fields from one of my Splunk search results as an input to a Lambda function using the "Add to SNS alert" action option. The value I am trying to pass is a URL, which I have saved under the field "url". In the action options, I am entering the AWS account name and topic name, but I am unable to understand what exactly I should pass in the message field. Currently, I have set the message field to $result.url$, but it's not working. Please can someone help me with this.
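For what it's worth, $result.url$ expands to the url field of the first result row, and only if that field survives to the final result set. So one thing to check, sketched below, is that the search ends by explicitly keeping the field:

```
... your search ...
| table url
```

If url is missing from the first row (or was dropped by an earlier stats/fields command), the token expands to an empty string, which would look exactly like "not working".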
Hello, after I upgraded Splunk to version 9.1.1, some parts of the overview page in the Distributed Monitoring Console are not populated and appear empty. The other tabs in the Distributed Monitoring Console do not have any problems. Does anyone have any idea what the problem is? Thanks