All Topics

I would like to better understand how transformations work, in terms of priority and data flow. Let's say I have three transforms in place that I want to apply to a sourcetype (e.g. "test"): [tr1], [tr2], [tr3]. Can somebody explain to me what the difference is (if any) between these two props.conf configurations?

    ----1----
    [test]
    TRANSFORMS-1 = tr1
    TRANSFORMS-2 = tr2
    TRANSFORMS-3 = tr3

    ----2----
    [test]
    TRANSFORMS-1 = tr1,tr2,tr3

Is the resulting data transformation the same in both cases? Thank you!
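For context, order inside a single TRANSFORMS-<class> list is significant, because a later transform in the list can overwrite what an earlier one set. A minimal sketch of the well-known "discard everything, then rescue some events" pattern (stanza names here are illustrative, not from the question):

    # props.conf
    [test]
    TRANSFORMS-set = setnull, setparsing

    # transforms.conf
    [setnull]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [setparsing]
    REGEX = \[KEEP\]
    DEST_KEY = queue
    FORMAT = indexQueue

Swapping the two names in the list would discard everything, since setnull would then get the last word on the queue key.
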
Hi Community,

We have a weird situation where one of our newest Splunk installs (3 months old) ran out of space; the capacity of the server was 500 GB. When I checked the size of each index in the GUI, the sizes were all under the limit. The sum of them all was under 250 GB, which made sense, as the maximum size of each index is set to 500 GB (the default). But when I calculated the size of the data models associated with the indexes, I could see that the data models had used almost 250 GB.

My understanding was that the data models should also be counted under the index capacity, but they seem to exceed the limits.

Can anyone please throw some light on this topic?

Regards,
Pravin
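To my understanding, data model acceleration summaries are written alongside the index buckets (under datamodel_summary directories) but are not counted against an index's maxTotalDataSizeMB, which would explain the gap. A quick sketch for comparing configured limits against actual index usage:

    | rest /services/data/indexes splunk_server=local
    | table title currentDBSizeMB maxTotalDataSizeMB
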
Dear team,

I am using the Windows machine agent to capture default metrics, but I am getting the following error on the client side when debug mode is enabled.

    com.google.inject.ProvisionException: Unable to provision, see the following errors:

    1) [Guice/ErrorInCustomProvider]: IllegalArgumentException: Cipher cannot be initialized
      while locating LocalSimCollectorScriptPathProvider
      at PathRawCollectorModule.configure(PathRawCollectorModule.java:25)
          \_ installed by: ServersExtensionModule -> PathRawCollectorModule
      at SimCollectorProcessBuilderFromPathProvider.<init>(SimCollectorProcessBuilderFromPathProvider.java:42)
          \_ for 1st parameter
      while locating SimCollectorProcessBuilderFromPathProvider
      while locating Optional<RawCollectorUtil$ICollectorProcessBuilder>

    Learn more:
      https://github.com/google/guice/wiki/ERROR_IN_CUSTOM_PROVIDER

    1 error

    ======================
    Full classname legend:
    ======================
    LocalSimCollectorScriptPathProvider:        "com.appdynamics.sim.agent.extensions.servers.collector.LocalSimCollectorScriptPathProvider"
    Optional:                                   "com.google.common.base.Optional"
    PathRawCollectorModule:                     "com.appdynamics.sim.agent.extensions.servers.collector.PathRawCollectorModule"
    RawCollectorUtil$ICollectorProcessBuilder:  "com.appdynamics.sim.agent.extensions.servers.collector.RawCollectorUtil$ICollectorProcessBuilder"
    ServersExtensionModule:                     "com.appdynamics.sim.agent.extensions.servers.ServersExtensionModule"
    SimCollectorProcessBuilderFromPathProvider: "com.appdynamics.sim.agent.extensions.servers.collector.SimCollectorProcessBuilderFromPathProvider"
    ========================
    End of classname legend:
    ========================

    at com.google.inject.internal.InternalProvisionException.toProvisionException(InternalProvisionException.java:251) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1104) ~[guice-5.1.0.jar:?]
    at com.appdynamics.sim.agent.extensions.servers.model.windows.WindowsRawCollector.collectRawData(WindowsRawCollector.java:84) ~[servers-23.7.0.3689.jar:?]
    at com.appdynamics.sim.agent.extensions.servers.model.windows.WindowsRawCollector.collectRawData(WindowsRawCollector.java:49) ~[servers-23.7.0.3689.jar:?]
    at com.appdynamics.sim.agent.extensions.servers.model.Server.collectAndReport(Server.java:65) ~[servers-23.7.0.3689.jar:?]
    at com.appdynamics.sim.agent.extensions.servers.ServersDataCollector.run(ServersDataCollector.java:94) [servers-23.7.0.3689.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [?:1.8.0_101]
    at java.util.concurrent.FutureTask.runAndReset(Unknown Source) [?:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source) [?:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) [?:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_101]
    at java.lang.Thread.run(Unknown Source) [?:1.8.0_101]
    Caused by: java.lang.IllegalArgumentException: Cipher cannot be initialized
    at com.appdynamics.agent.sim.encryption.MessageDigestDecryptionService.decryptInputStream(MessageDigestDecryptionService.java:92) ~[machineagent.jar:Machine Agent v23.7.0.3689 GA compatible with 4.4.1.0 Build Date 2023-07-20 08:59:11]
    at com.appdynamics.sim.agent.extensions.servers.collector.LocalSimCollectorScriptPathProvider.get(LocalSimCollectorScriptPathProvider.java:77) ~[?:?]
    at com.appdynamics.sim.agent.extensions.servers.collector.LocalSimCollectorScriptPathProvider.get(LocalSimCollectorScriptPathProvider.java:34) ~[?:?]
    at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:86) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.BoundProviderFactory.provision(BoundProviderFactory.java:72) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:60) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:59) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:58) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1101) ~[guice-5.1.0.jar:?]
    ... 11 more
    Caused by: java.security.InvalidKeyException: Illegal key size
    at javax.crypto.Cipher.checkCryptoPerm(Cipher.java:1039) ~[?:1.8.0_71]
    at javax.crypto.Cipher.init(Cipher.java:1393) ~[?:1.8.0_71]
    at javax.crypto.Cipher.init(Cipher.java:1327) ~[?:1.8.0_71]
    at com.appdynamics.agent.sim.encryption.MessageDigestDecryptionService.decryptInputStream(MessageDigestDecryptionService.java:90) ~[machineagent.jar:Machine Agent v23.7.0.3689 GA compatible with 4.4.1.0 Build Date 2023-07-20 08:59:11]
    at com.appdynamics.sim.agent.extensions.servers.collector.LocalSimCollectorScriptPathProvider.get(LocalSimCollectorScriptPathProvider.java:77) ~[?:?]
    at com.appdynamics.sim.agent.extensions.servers.collector.LocalSimCollectorScriptPathProvider.get(LocalSimCollectorScriptPathProvider.java:34) ~[?:?]
    at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:86) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.BoundProviderFactory.provision(BoundProviderFactory.java:72) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:60) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:59) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:58) ~[guice-5.1.0.jar:?]
    at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1101) ~[guice-5.1.0.jar:?]
    ... 11 more

Any insights would be a great help.

Regards,
Sathish
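The innermost cause, java.security.InvalidKeyException: Illegal key size on a 1.8.0_101 JRE, usually means unlimited-strength cryptography is not enabled in that Java runtime. A hedged sketch of the usual remedies: on 8u151 or later a single property in the JRE's java.security file enables it; on older builds like this one, the JCE Unlimited Strength Jurisdiction Policy Files (or simply pointing the machine agent at a newer JRE) would be needed instead.

    # $JAVA_HOME/jre/lib/security/java.security
    # Assumption: the JRE is 8u151+; 1.8.0_101 predates this property.
    crypto.policy=unlimited
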
Hello everyone!

Does anybody know whether it is possible to aggregate (bind) auditd events (I mean logs from audit/audit.log) into one event by Record ID (Event ID)? I would like to do it at the parsing stage, so that the events arrive in my index already aggregated into one.
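Merging separate auditd records at parse time is not something the index pipeline does natively, but at search time the records can be stitched on the shared audit serial number. A minimal sketch, assuming raw auditd lines containing msg=audit(timestamp:serial) and a hypothetical sourcetype name:

    sourcetype=linux:audit
    | rex "msg=audit\(\d+\.\d+:(?<audit_serial>\d+)\)"
    | transaction audit_serial maxspan=5s
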
Hello,

We are investigating whether we can install the Splunk OpenTelemetry Collector for Kubernetes with Helm to collect and ingest our logs into Splunk Cloud. We would like to split the system logs from the other logs into two different indexes. Reading the documentation, I saw that it is possible to indicate the index as an annotation on namespaces or pods, but in the chart's values.yaml the index field is required and seems to be usable for only one index. In summary, we want to use two different indexes, setting one as the default and selecting the other via namespace annotations. Could you kindly show me a configuration for our problem?
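A sketch of the split, assuming the splunk-otel-collector chart: splunkPlatform.index acts as the default, and the chart honors a splunk.com/index annotation as a per-namespace (or per-pod) override. Endpoint, token, and index names below are placeholders:

    # values.yaml (sketch)
    clusterName: my-cluster
    splunkPlatform:
      endpoint: https://http-inputs-<stack>.splunkcloud.com:443/services/collector
      token: <hec-token>
      index: app_logs        # default index for anything not annotated

Then route one namespace's logs to the second index:

    kubectl annotate namespace kube-system splunk.com/index=system_logs
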
Is it possible for the true and false parts of an if statement to contain eval expressions?

    | eval pwdExpire=if(type="staff",
    | eval relative_time(_time, "+90day") ,
    | eval relative_time(_time, "+180day")
    )

The desired result: if type="staff", calculate pwdExpire as _time + 90 days; otherwise, calculate pwdExpire as _time + 180 days. I will then format pwdExpire and display it in a table.
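For reference, a minimal working sketch: the true and false branches of if() take eval expressions directly, with no nested | eval and no leading pipe:

    | eval pwdExpire=if(type=="staff", relative_time(_time, "+90d"), relative_time(_time, "+180d"))
    | eval pwdExpire=strftime(pwdExpire, "%Y-%m-%d")
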
Hi,

I want to restrict access for different teams based on hosts, but I don't want to do it by creating multiple indexes. The data would live in one index, and teams would be given access to that index; however, they should only be able to see the data they own. Is there a way host-based restriction can be achieved?
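One common approach is a per-role search filter in authorize.conf; srchFilter is silently ANDed onto every search the role runs. A sketch with hypothetical role, index, and host naming:

    [role_team_a]
    importRoles = user
    srchIndexesAllowed = shared_index
    srchFilter = host=teama-*
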
I am using ITSI's KPI-based search for text log monitoring. If the text logs match the search criteria, the flow is to send an alert via email. I would like to quote the contents of the text logs that matched the detection criteria in the body of the email. Is it possible to implement such a requirement with Splunk ITSI? If so, I would like to know the details of the implementation. If not, I would like to know the reason why.
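Not an ITSI-specific answer, but in standard Splunk alert actions the email body can reference fields from the triggering search via $result.<field>$ tokens, so one possible route (an assumption, not a confirmed ITSI pattern) is a correlation search whose email action quotes the matching event, e.g. a message body of:

    The following log line matched the detection criteria:
    $result._raw$
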
Hi All,

Is there a way to retrieve a specific alert on the Incident Review page without using the short ID? I was thinking of using the alert's "rule_id" field or its "event_hash", but I wasn't able to pull up the specific alert. Please suggest any alternate method other than using the short ID. Thanks.
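Outside Incident Review, notable events are ordinary events in the notable index, so they can be pulled by rule_id or event_hash directly in search. A sketch, assuming Enterprise Security's `notable` macro is available (the rule_id value is a placeholder):

    `notable` | search rule_id="<rule_id_value>"
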
We are currently using a regex pattern to match events against our raw data, and it works perfectly in the Search app. The pattern we are using is:

    C:\\Windows\\system32\\cmd\.exe*C:\\ProgramData\\Symantec\\Symantec Endpoint Protection\\14\.3\.8289\.5000\.105\\Data\\Definitions\\WebExtDefs\\20230830\.063\\webextbridge\.exe*

However, when we try to use this pattern in a lookup table, the events are not matched. This seems to be because of the wildcards in the pattern. Despite defining the field name in the lookup definition (e.g., WILDCARD(process)), it still doesn't match the events. I'm wondering whether Splunk lookups support wildcards within strings, or only at the beginning and end of strings. Any insights or guidance on this matter would be greatly appreciated.

Regards,
VK
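For reference, a minimal wildcard lookup setup (names below are hypothetical). One detail that may matter here: as far as I understand, WILDCARD match values are literal strings plus * wildcards (including mid-string), not regexes, so regex escapes like \. or \\ in the lookup value are compared literally and will not match:

    # transforms.conf
    [proc_watchlist]
    filename = proc_watchlist.csv
    match_type = WILDCARD(process)

    # proc_watchlist.csv
    process,category
    C:\Windows\system32\cmd.exe*webextbridge.exe*,suspicious_chain
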
Hi, we are logging API requests in Splunk. I would like to create a sort of health-check table where every column represents the status code of the last API call in the previous 5 minutes, while each row is a different API. Here is an example of what the output should be [example image not included]. Any idea how I could achieve that in Splunk? Each row represents a different API (request.url), while the status code is stored in response.status. Thank you.
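A minimal sketch (assuming the JSON fields request.url and response.status, with a placeholder index): bucket into 5-minute spans, take the latest status per API per span, then pivot the spans into columns:

    index=api_logs
    | bin _time span=5m
    | stats latest("response.status") as status by _time, "request.url"
    | xyseries "request.url" _time status
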
Hi All,

Below is my query:

    index=idx-sec-cloud sourcetype=rubrik:json NOT summary="*on demand backup*"
        (custom_details.eventName="Snapshot.BackupFailed" NOT (custom_details.errorId="Oracle.RmanStatusDetailsEmpty"))
        OR (custom_details.eventName="Mssql.LogBackupFailed")
        OR (custom_details.eventName="Snapshot.BackupFromLocationFailed" NOT (custom_details.errorId="Fileset.FailedDataThresholdNas" OR custom_details.errorId="Fileset.FailedFileThresholdNas" OR custom_details.errorId="Fileset.FailedToFindFilesNas"))
        OR (custom_details.eventName="Vmware.VcenterRefreshFailed")
        OR (custom_details.eventName="Hawkeye.IndexOperationOnLocationFailed")
        OR (custom_details.eventName="Hawkeye.IndexRetryFailed")
        OR (custom_details.eventName="Storage.SystemStorageThreshold")
        OR (custom_details.eventName="ClusterOperation.DiskLost")
        OR (custom_details.eventName="ClusterOperation.DiskUnhealthy")
        OR (custom_details.eventName="Hardware.DimmError")
        OR (custom_details.eventName="Hardware.PowerSupplyNeedsReplacement")
    | eventstats earliest(_time) AS early_time latest(_time) AS late_time

I am trying to write a condition that excludes all custom_details.eventName="Mssql.LogBackupFailed" events (the second OR clause above) unless their count is greater than 2. Not sure how to proceed.

Thanks in advance.
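One way to express "drop Mssql.LogBackupFailed unless it occurs more than twice" is to count those events with eventstats and filter afterwards; a sketch to append after the base search (note the single quotes needed to reference a dotted field inside eval/where):

    | eventstats count(eval('custom_details.eventName'=="Mssql.LogBackupFailed")) as mssql_failures
    | where 'custom_details.eventName'!="Mssql.LogBackupFailed" OR mssql_failures > 2
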
Hello,

I would like to pass one of the columns/fields from my Splunk search results as an input to a Lambda function using the "Add to SNS alert" action. The value I am trying to pass is a URL, which I have saved under the field "url". In the action options I am entering the AWS account name and the topic name, but I am unable to understand what exactly I should pass in the Message field. Currently I have set the Message field to $result.url$, but it's not working. Please can someone help me with this?
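$result.<field>$ tokens resolve from the first row of the final result set, so one thing worth verifying (an assumption about the cause, not a confirmed fix) is that the url field actually survives to the end of the search, e.g.:

    index=web_logs status=500
    | stats latest(url) as url
    | table url
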
Hello, after I upgraded Splunk to version 9.1.1, some panels on the Overview page of the Distributed Monitoring Console are not populated and appear empty. The other tabs in the Distributed Monitoring Console do not have any problems. Does anyone have any idea what the problem is? Thanks.
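As a first triage step, the searches behind the empty panels usually log their failures to _internal; a sketch for surfacing post-upgrade errors on the Monitoring Console node itself (the host value is a placeholder):

    index=_internal host=<mc_host> log_level=ERROR
    | stats count by component, sourcetype
    | sort - count
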
Hello, I am attempting a Splunk search via a macro. When using the Splunk search UI, the relevant information comes up; however, despite using the same user, the API search doesn't return the same results. On further investigation, when looking for the macro on the website I can see it, but not via the API. What could be causing this?
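Macros resolve within an app namespace, and a REST search job does not necessarily run in the same app context as the UI. A sketch of pinning the job to the app that owns the macro via the /servicesNS path (user, app, credentials, and macro name are placeholders):

    curl -k -u admin:changeme \
      https://localhost:8089/servicesNS/admin/search/search/jobs \
      -d search='search `my_macro`' \
      -d exec_mode=oneshot
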
I edited the conf files on my local server before deploying, so I know they are all identical. I have 5 servers. I copied the Splunk_TA_nix folder to apps. 3 of the 5 have data showing up in the new "os" index. splunkd.log, in fact the whole splunk/log folder, didn't have any errors; but it also didn't have any mention of "idx=os" on the missing servers. I ran some of the scripts in Splunk_TA_nix/bin in debug mode. No errors. What log file or index do I check to debug the issue?
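Two checks that often narrow this down: confirm what the quiet forwarders actually merged into their running config, and watch the scripted-input scheduler in _internal (the host value is a placeholder):

    # on a quiet forwarder: are the Splunk_TA_nix inputs enabled after config layering?
    $SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -A4 Splunk_TA_nix

    # from the search head: did the scripted inputs run at all?
    index=_internal host=<quiet_host> component=ExecProcessor Splunk_TA_nix
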
Hello everyone, I'm working on a project, "Splunk Enterprise: An organization's go-to in detecting cyber threats". Please, how/where can I get datasets and logs to use for my project?
Hi, I'm currently working on obtaining Windows Filtering Platform (WFP) event logs to identify the user responsible for running an application. My goal is to enhance firewall rules by considering both the application and the specific user. To achieve this, I've set up a system to send all logs to Splunk, which is already operational. However, I've encountered an issue: the WFP event logs do not display the authorized principal user who executed the application. This missing user information makes it difficult to determine who used which application, which I need before I can further refine the firewall rules. I can readily access details such as destination, source, port, application, and protocol, but the missing username is the crucial piece of information I need. If you have any insights or suggestions on how to address this, I would greatly appreciate your assistance. Thank you for any guidance you can provide.
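WFP events (5156/5157) carry a Process ID but no account, while process-creation events (4688, when that audit policy is enabled) carry both, so one workaround is correlating the two on PID. A rough sketch; the field names follow common Windows add-on extractions and may differ in your environment, and note that 4688 records the new PID in hex, so a conversion step is likely needed (PID reuse is a further caveat):

    (index=wineventlog EventCode=5156) OR (index=wineventlog EventCode=4688)
    | eval pid=coalesce(Process_ID, tonumber(replace(New_Process_ID, "^0x", ""), 16))
    | stats values(Account_Name) as user values(Application_Name) as app by host, pid
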
Greetings,

I am struggling with creating a table in Splunk which would do the following transformation: find the distinct count of id for A, B, and C where the value is 1, by month. Currently, I am calculating the values for each column individually using eventstats and combining the results. However, we have a lot of columns (a, b, c, d, ...) and thus the SPL does not perform efficiently.

Looking for a more efficient approach to this.

Thanks in advance!
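If the flags live as separate fields on each event, a single stats pass avoids the repeated eventstats work; a sketch assuming fields id, a, b, c holding 0/1 values and a placeholder index:

    index=mydata
    | bin _time span=1mon
    | stats dc(eval(if(a=1, id, null()))) as dc_a
            dc(eval(if(b=1, id, null()))) as dc_b
            dc(eval(if(c=1, id, null()))) as dc_c
            by _time
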
Hello guys!

I have spent a month trying to forward my logs from iMacs using the UF. The logs have the following format:

    Resources,line,data,process
    2023-09-30T06:35:02,"Scanned disks....... "
    2023-09-30T06:35:02,User: ......
    2023-09-30T06:35:02,...........
    2023-09-30T06:35:02,............
    2023-09-30T06:35:02,Time of completion: ..........

but when the log gets into Splunk, it only shows the first row:

    Resources,line,data,process

and the rest of the log reaches Splunk 6 hours later.

I've added the following rule in props.conf, but it is still failing.
Path: /Applications/SplunkForwarder/etc/system/local/props.conf

    [name_of_my_sourcetype]
    CHARSET=UTF-8
    TIME_FORMAT=%Y-%m-%dT%H:%M:%S,
    TIME_PREFIX=^
    LINE_BREAKER=([\r\n]+)
    NO_BINARY_CHECK=true
    SHOULD_LINEMERGE=true
    TZ=America/Mexico_City
    disabled=false

After every change, I restart the forwarder with ./splunk restart. I have no access to the Splunk server (SSH), but if needed I could try to make some configurations; I just don't know where.
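Two things stand out, hedged since I can't see the rest of the pipeline. First, SHOULD_LINEMERGE=true asks Splunk to merge lines back into one event, which fights the one-event-per-line format. Second, on a universal forwarder these parsing settings are generally ignored; they take effect on the indexer side (unless INDEXED_EXTRACTIONS is used, which does run on the UF). Also, the "6 hours later" symptom looks like a timezone offset (America/Mexico_City is UTC-6) rather than a real delay, and TZ only helps where the timestamp is actually parsed. A sketch of a stanza that breaks on each timestamped row, to be placed where parsing happens:

    [name_of_my_sourcetype]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}T
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19
    TZ = America/Mexico_City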