All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi,

We have a Splunk installation with SmartStore enabled. We have plenty of cache on disk, so we are nowhere near the space padding setting. I have seen bucket downloads from S3, which I did not expect. So my question is: does Splunk pre-emptively evict buckets even if there is enough space? I see no documentation that states it does anything other than LRU.

Regards, André
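A diagnostic search one might try before digging further (a sketch, assuming evictions are logged to _internal under component=CacheManager; message wording and field names can vary by Splunk version):

    index=_internal sourcetype=splunkd component=CacheManager evict
    ``` count eviction messages per indexer to see whether evictions are happening at all ```
    | stats count by host

If this returns evictions while the cache is well under its limits, the logged messages usually say why the cache manager acted.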
I want to deploy a single Splunk collector in my AWS ECS cluster which will:
1. Collect all resource metrics for the other tasks running within the same cluster.
2. Receive, process, and forward all custom OTel metrics sent to it by the applications themselves.

Is this possible?

Thanks
I created a role with the capabilities 'edit_license' and 'edit_user', but I didn't receive all the users from the GET request to the URL /services/authentication/users?output_mode=json. It only returned some of the users. Without the 'edit_license' capability, I received the following error:

    "messages": [ { "type": "ERROR", "text": "Unauthorized" } ]

What are the minimum permissions required to retrieve all users, and does anyone know if this is the same for Splunk Cloud?
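For comparison, the same endpoint can be queried from within Splunk search; the visible set is governed by the same capability checks as the raw REST call, so it is a quick way to test what a role can enumerate (a sketch):

    | rest /services/authentication/users splunk_server=local
    ``` title is the username; roles shows the role assignments ```
    | table title, realname, roles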
Hello all, I am trying to ingest metrics via OpenTelemetry in an enterprise environment. I have installed the Splunk Add-on for OpenTelemetry Collector, which according to the documentation is compatible. I have some doubts about configuring it: where can I find the following connection points for my enterprise environment?
- SPLUNK_API_URL: The Splunk API URL, e.g. https://api.us0.signalfx.com
- SPLUNK_INGEST_URL: The Splunk ingest URL, e.g. https://ingest.us0.signalfx.com
- SPLUNK_LISTEN_INTERFACE: The network interface the agent receivers listen on.
- SPLUNK_TRACE_URL: The Splunk trace endpoint URL, e.g. https://ingest.us0.signalfx.com/v2/trace
Is there a configuration file where I can view these? Do I have to complete some step beforehand to get those services up?

Thanks in advance, BR JAR
Hello, the UI of my search head is not loading. I am seeing only a white screen with no error message as such. splunkd is also running. Kindly suggest?
I am a beginner in Splunk and I have created a new app in Splunk Enterprise. I am not able to see the appserver folder in the newly created app. How can I add that directory?
Error message: "Unable to load app list. Refresh the page to try again." Can anyone help with this?
Good day all, I am new to Splunk and am currently having a problem at startup. How do I switch from the Enterprise Trial license to Free?
In our log, I'd like to extract statusText and categorize it in a table to see how many error responses there are per statusCode and statusText. For example:

    eventSource | statusCode | statusText
    bulkDelete  | 1020       | 3031: No Card found with the identifier for the request

But my query gets "has exceeded configured match_limit, consider raising the value in limits.conf." after the field extraction:

    index = xxx sourcetype=xxx "Publish message on SQS"
    | search bulkDelete
    | rex field=_raw "(?ms)^(?:[^:\\n]*:){7}\"(?P<error_bulkDelete>[^\"]+)(?:[^:\\n]*:){2}\"(?P<error_errorCode>[^\"]+)[^:\\n]*:\"(?P<error_desc>[^\"]+)(?:[^:\\n]*:){6}\\\\\"(?P<error_statusText>[^\\\\]+)" offset_field=_extracted_fields_bounds

Target log:

    Publish message on SQS, queueName=xxx, retryCount=0, message={"traceId":"xxx1112233","clientContext":"xxxxxclientContext","cardTokenReferenceId":"xxxcardTokenReferenceId","eventSource":"bulkDelete","errors":[{"errorCode":"52099","errorDescription":"Feign Client Exception.","retryCategory":"RETRYABLE","errorDetails":"{\"clientContext\":\"xxxxxclientContext\",\"ewSID\":\"xxxxSID\",\"statusCode\":\"1020\",\"statusText\":\"3031: No Card found with the identifier for the request\",\"timestampISO8601\":\"2024-04-05T00:00:26Z\"}"}]}

I checked similar posts and they suggested using non-greedy matching, so I tried:

    index = "xxx" sourcetype=xxx "Publish message on SQS*" bulkDelete
    | rex field=_raw "\"statusText\":\s*\"(?P<statusText>[^\"]+)\""
    | where NOT LIKE( statusText, "%Success%")

If I add "| table statusText", I get blank content in the statusText column.
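Since statusText lives inside a doubly encoded JSON string (its quotes appear as \" in _raw, which is why the simple rex matches nothing), a parse-based approach may be more reliable than a regex. A sketch, assuming the JSON payload always follows "message=" in the event:

    index=xxx sourcetype=xxx "Publish message on SQS" bulkDelete
    ``` capture the outer JSON object after message= ```
    | rex field=_raw "message=(?<msg>\{.+\})"
    | spath input=msg path=eventSource
    ``` errorDetails is a JSON string; spath unescapes it on extraction ```
    | spath input=msg path=errors{}.errorDetails output=errorDetails
    | mvexpand errorDetails
    ``` parse the inner JSON to reach statusCode/statusText ```
    | spath input=errorDetails path=statusCode
    | spath input=errorDetails path=statusText
    | where NOT like(statusText, "%Success%")
    | stats count by eventSource, statusCode, statusText

This avoids the match_limit problem entirely because the heavy lifting is done by spath rather than a backtracking regex.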
We would like to be able to configure the Okta application to be of an "API Services" application type vs a "Web Application" type when setting up the Splunk Add-on for Okta Identity Cloud for OAuth2. Using a "Web Application" type requires a user account associated with the auth flow. This ties the auth to a specific user; if that user is suspended or disabled, the TA stops working. Ideally this is not tied to a user, but to an "API Services" application type. Okta recommends the "API Services" application type for machine-to-machine auth. Are there plans to support this in the add-on going forward, since the "Web Application" type is less robust and not what Okta recommends?
Due to some oddities of our environment, my team needs default fields in order to run some playbooks automatically. We've built these fields into the notable events which get sent over from Splunk. However, containers are built without an artifact when created manually. While we could certainly train people to follow some manual steps to create an artifact or toggle the Artifact Dependency switch, that goes against the nature of SOAR and it's easy to miss something. It's easier to have a playbook create an artifact with those fields we need. Unfortunately, the Artifact Dependency switch defaults to off. So, the actual question: Has anyone found a way to change the default for the Artifact Dependency switch or to make a playbook run before an artifact is created?
{"id":"0","severity":"Information","message":[{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND"... See more...
{"id":"0","severity":"Information","message":[{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND":"0","ERROR_1":"0"},{"TARGET_SYSTEM":"CPW","FUNCTION_NAME":"CPW_02171","TOTAL":"26434","PROCESSED":"26434","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"},{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02172","TOTAL":"23343","PROCESSED":"2647812","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"}]} I want to extract all fields in the form of table from  "message" which is holding JSON array . And I want a total row for each column where total running total will display for each numeric column based on TARGET_SYSTEM . 
    message: Updated Components {
      "servicechannel": [
        { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-15T17:20:09.000Z", "ComponentName": "ServiceChannel", "RecordId": "0N98X001200Gvv3SAC" }
      ],
      "omnisupervisorconfig": [
        { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:17:37.000Z", "ComponentName": "OmniSupervisorConfig", "RecordId": "0Q27X000000KyrESAS" }
      ],
      "livechatbutton": [
        { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:31:35.000Z", "ComponentName": "LiveChatButton", "RecordId": "5638X000000Xw55QAC" }
      ]
    }

Desired output columns: LastmodifiedBy | ModifiedDate | ComponentName | RecordId
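Because the top-level keys (servicechannel, omnisupervisorconfig, ...) vary per event, one option is to pull out each inner record directly rather than naming every key. A sketch, assuming the component records never contain nested braces:

    ... | rex field=_raw "Updated Components\s*(?<json>\{.+\})"
    ``` grab every innermost {...} object, regardless of which key it sits under ```
    | rex field=json max_match=0 "(?<rows>\{[^{}]+\})"
    | mvexpand rows
    | spath input=rows
    | table LastmodifiedBy, ModifiedDate, ComponentName, RecordId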
I know vCenter has an API to get information about the local file system of a guest VM running on an ESXi host (as long as VMware Tools is installed on the VM):
- capacity (in bytes)
- freeSpace (in bytes)
- diskPath (e.g. C:\ for Windows or / for *nix)
- fileSystemType (e.g. ext3, NTFS, etc.)
Ref #1: https://vdc-download.vmware.com/vmwb-repository/dcr-public/184bb3ba-6fa8-4574-a767-d0c96e2a38f4/ba9422ef-405c-47dd-8553-e11b619185b2/SDK/vsphere-ws/docs/ReferenceGuide/vim.vm.GuestInfo.DiskInfo.html
Ref #2: https://developer.vmware.com/apis/vsphere-automation/latest/vcenter/api/vcenter/vm/vm/guest/local-filesystem/get/
I believe RVTools and some monitoring tools use this specific API to grab info about the local file system of the guest VM.

So far I have been able to find metrics regarding datastore usage. This is fine, but an equally important metric is local disk utilization of the guest VM. Which metric is responsible for getting this info in the VMware or VMware Metrics add-ons?
https://docs.splunk.com/Documentation/AddOns/released/VMW/Sourcetypes
https://docs.splunk.com/Documentation/AddOns/released/VMWmetrics/Sourcetypes
If none of the listed, is there a way to customize the VMW or VMWmetrics add-ons to grab this crucial information about VMs from vCenter? Or perhaps I should look elsewhere - I mean a different app/add-on?
Greetings!

I'm unable to start appdynamics-machine-agent following the same install instructions that work on RHEL 7. Machine Agent Bundle - 64-bit linux (rpm) 24.3.0 is installed. I updated the config file to match the same controller/settings/etc. as the RHEL 7 servers. Upon starting the service, the status is failed, and the logs say: Could not initialize class com.sun.jna.Native

/opt/appdynamics/machine-agent/logs/startup.out output:

    2024-04-16 11:15:53.430 Using Agent Version [Machine Agent v24.3.0.4127 GA compatible with 4.4.1.0 Build Date 2024-03-20 05:00:40]
    ERROR StatusLogger Reconfiguration failed: No configuration found for '10dba097' at 'null' in 'null'
    2024-04-16 11:15:55.037 [INFO] Agent logging directory set to: [/opt/appdynamics/machine-agent/logs]
    2024-04-16 11:15:53.468 Could not start up the machine agent due to: Could not initialize class com.sun.jna.Native
    2024-04-16 11:15:53.468 Please see startup.log in the current working directory for details.

/opt/appdynamics/machine-agent/startup.log output:

    Tue Apr 16 11:15:55 CDT 2024
    java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
        at oshi.jna.platform.linux.LinuxLibc.<clinit>(LinuxLibc.java:22)
        at oshi.software.os.linux.LinuxOperatingSystem.<clinit>(LinuxOperatingSystem.java:97)
        at oshi.hardware.platform.linux.LinuxCentralProcessor.initProcessorCounts(LinuxCentralProcessor.java:166)
        at oshi.hardware.common.AbstractCentralProcessor.<init>(AbstractCentralProcessor.java:65)
        at oshi.hardware.platform.linux.LinuxCentralProcessor.<init>(LinuxCentralProcessor.java:57)
        at oshi.hardware.platform.linux.LinuxHardwareAbstractionLayer.createProcessor(LinuxHardwareAbstractionLayer.java:43)
        at oshi.util.Memoizer$1.get(Memoizer.java:61)
        at oshi.hardware.common.AbstractHardwareAbstractionLayer.getProcessor(AbstractHardwareAbstractionLayer.java:48)
        at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.getOshiBasedLicenseCpuInfo(MachineLicensePropertiesProvider.java:75)
        at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.getLicenseCpuInfo(MachineLicensePropertiesProvider.java:44)
        at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.get(MachineLicensePropertiesProvider.java:106)
        at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.get(MachineLicensePropertiesProvider.java:25)
        at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:86)
        at com.google.inject.internal.BoundProviderFactory.provision(BoundProviderFactory.java:72)
        at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:60)
        at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:59)
        at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40)
        at com.google.inject.internal.RealMultibinder$RealMultibinderProvider.doProvision(RealMultibinder.java:253)
        at com.google.inject.internal.RealMultibinder$ExtensionRealMultibinderProvider.doProvision(RealMultibinder.java:307)
        at com.google.inject.internal.RealMultibinder$ExtensionRealMultibinderProvider.doProvision(RealMultibinder.java:289)
        at com.google.inject.internal.InternalProviderInstanceBindingImpl$Factory.get(InternalProviderInstanceBindingImpl.java:113)
        at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40)
        at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60)
        at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113)
        at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
        at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300)
        at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:58)
        at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
        at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169)
        at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
        at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40)
        at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60)
        at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113)
        at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
        at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300)
        at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
        at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169)
        at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
        at com.google.inject.internal.InternalInjectorCreator.loadEagerSingletons(InternalInjectorCreator.java:213)
        at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:186)
        at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:113)
        at com.google.inject.Guice.createInjector(Guice.java:87)
        at com.google.inject.Guice.createInjector(Guice.java:69)
        at com.appdynamics.voltron.FrameworkBootstrap.createInjector(FrameworkBootstrap.java:107)
        at com.appdynamics.voltron.FrameworkBootstrap.start(FrameworkBootstrap.java:162)
        at com.appdynamics.voltron.FrameworkBootstrap.startAndRun(FrameworkBootstrap.java:120)
        at com.appdynamics.voltron.FrameworkApplication.start(FrameworkApplication.java:31)
        at com.appdynamics.agent.sim.Main.startSafe(Main.java:64)
        at com.appdynamics.agent.sim.bootstrap.Bootstrap.main(Bootstrap.java:48)
Hi, can anyone please suggest where I can submit a bug report for dashboard visualisations? Thanks
I need to report hosts that are configured to receive app.log details and also report the ones that are missing. For this, I use the query:

    index=application sourcetype="application:server:log"
    | stats values(host) as hostnames by customer_name

This retrieves the hostnames for each customer_name from the sourcetype, with a result like:

    customer_name | hostnames
    customer1     | server1
    customer2     | server2, server3

Then I join the result by the customer_name field with the second part of the query, which retrieves the hostnames for each customer_name from the server_info.csv lookup table:

    [| inputlookup server_info.csv
    | rename customer as customer_name
    | stats values(host) as hostnames_lookup by customer_name]

Here I get a result like:

    customer_name | hostnames        | hostnames_lookup
    customer1     | server1          | server1, server100
    customer2     | server2, server3 | server2, server3, server101

Later, I expand both multivalue fields and evaluate them to classify hosts as configured or not configured. The evaluation looks like this:

    | mvexpand hostnames
    | mvexpand hostnames_lookup
    | eval not_configured = if(hostnames == hostnames_lookup, hostnames, null())
    | eval configured = if(hostnames != hostnames_lookup, hostnames, null())
    | fields customer_name, hostnames, hostnames_lookup, configured, not_configured

My final query looks like this:

    index=application sourcetype="application:server:log"
    | stats values(host) as hostnames by customer_name
    | join customer_name
        [| inputlookup server_info.csv
        | rename customer as customer_name
        | stats values(host) as hostnames_lookup by customer_name]
    | mvexpand hostnames
    | mvexpand hostnames_lookup
    | eval not_configured = if(hostnames == hostnames_lookup, hostnames, null())
    | eval configured = if(hostnames != hostnames_lookup, hostnames, null())
    | fields customer_name, hostnames, hostnames_lookup, configured, not_configured

However, the matching logic doesn't work and the resulting output is incorrect. No values appear in the not_configured column, and the configured column only returns the values in hostnames. I'd expect the configured field to show all servers configured to receive app.log, and not_configured to hold hostnames that are present in the lookup but not configured to receive logs.

Expected output:

    customer_name | hostnames        | hostnames_lookup            | configured       | not_configured
    customer1     | server1          | server1, server100          | server1          | server100
    customer2     | server2, server3 | server2, server3, server101 | server2, server3 | server101

Current output:

    customer_name | hostnames        | hostnames_lookup            | configured       | not_configured
    customer1     | server1          | server1, server100          | server1          |
    customer2     | server2, server3 | server2, server3, server101 | server2, server3 |

Essentially, customer1 should display server1 as configured and server100 as not_configured, and likewise for customer2 as in the expected output table. That would mean server100 and server101 are part of the lookup but are not configured to receive app.log. How can I evaluate this differently so the comparison works as expected? Is it possible to compare the values in this fashion? Is there anything wrong with the current comparison logic? Should I not use mvexpand on the extracted fields so that they are compared as expected?
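For what it's worth, a sketch of a join-free way to classify each host (assuming the lookup's host column is named host; the in_logs/in_lookup flag names are placeholders):

    index=application sourcetype="application:server:log"
    | stats count by customer_name, host
    ``` mark hosts seen in the logs ```
    | eval in_logs=1
    | fields customer_name, host, in_logs
    | append
        [| inputlookup server_info.csv
        | rename customer as customer_name
        ``` mark hosts present in the lookup ```
        | eval in_lookup=1
        | fields customer_name, host, in_lookup]
    ``` collapse to one row per customer/host pair carrying both flags ```
    | stats max(in_logs) as in_logs, max(in_lookup) as in_lookup by customer_name, host
    | eval status=if(in_logs=1, "configured", "not_configured")
    | stats values(eval(if(status="configured", host, null()))) as configured,
            values(eval(if(status="not_configured", host, null()))) as not_configured
            by customer_name

Comparing per-host rows this way avoids the cross-product that the double mvexpand creates, which is why the == / != logic never lines up in the original query.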
We have an issue with long JSON log events which are longer than the console width limit - they are split into 2 separate events, each of which is not valid JSON. How do we handle this correctly? Is it possible to restore the broken messages on the Splunk side, or do we need to work with the logger to learn about the width limitation and chunk messages in a proper way? How should we handle large JSON events?
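If the split happens inside Splunk (event breaking or truncation at ingest), props.conf on the parsing tier is the usual lever. A minimal sketch, assuming each JSON event begins with "{" at the start of a line and that the wrapped continuation lines arrive contiguously in the same stream; the sourcetype name is a placeholder:

    [my_json_sourcetype]
    # break only where a newline is followed by an opening brace,
    # so console-wrapped continuation lines stay glued to their event
    LINE_BREAKER = ([\r\n]+)\{
    SHOULD_LINEMERGE = false
    # disable the default 10000-byte event truncation
    TRUNCATE = 0

If the console itself wraps or cuts the line before Splunk ever reads it, the data is already damaged and the fix has to happen at the source (e.g., log to a file instead of stdout, or raise/remove the width limit).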
Hello! I would like to run a search which would display all information regarding entities and services.
For example, for entities, where could I find the information stored for: Entity Description, Entity Information Field, Entity Title?
For services, where could I find the information stored for: Service Description, Service Title, Service Tags?
What type of search query could I run to find this information? Thanks,
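One approach sometimes used for ITSI is reading the service and entity objects over REST (a sketch, assuming ITSI's itoa_interface endpoints are present and your role is allowed to read them; the available fields vary by ITSI version):

    | rest /servicesNS/nobody/SA-ITOA/itoa_interface/service splunk_server=local
    | table title, description

and similarly for entities:

    | rest /servicesNS/nobody/SA-ITOA/itoa_interface/entity splunk_server=local
    | table title, description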
Thanks in advance. I have four inputs - Time, Environment, Application Name, and Interface Name - and two panels, one for finance and one for bank. Both panels have different application names and interface names, so I tried to use depends and rejects on the inputs. If I switch from one panel to another, the dropdown and text box inputs remain the same, but their values need to change per panel.

    <row>
      <panel id="panel_layout">
        <input id="input_link_split_by" type="link" token="tokSplit" searchWhenChanged="true">
          <label></label>
          <choice value="Finance">OVERVIEW</choice>
          <choice value="BankIntegrations">BANKS</choice>
          <default>OVERVIEW</default>
          <initialValue>OVERVIEW</initialValue>
          <change>
            <condition value="Finance">
              <set token="Finance">true</set>
              <unset token="BankIntegrations"></unset>
            </condition>
            <condition value="BankIntegrations">
              <set token="BankIntegrations">true</set>
              <unset token="Finance"></unset>
            </condition>
          </change>
        </input>
      </panel>
    </row>
    <row>
      <panel>
        <input type="time" token="time" searchWhenChanged="true">
          <label>Time Interval</label>
          <default>
            <earliest>-15m</earliest>
            <latest>now</latest>
          </default>
        </input>
        <input type="dropdown" token="env" searchWhenChanged="true">
          <label>Environment</label>
          <choice value="*">ALL</choice>
          <choice value="DEV">DEV</choice>
          <choice value="TEST">TEST</choice>
          <choice value="PRD">PRD</choice>
          <default>*</default>
          <initialValue>*</initialValue>
        </input>
        <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
          <label>ApplicationName</label>
          <choice value="*">ALL</choice>
          <choice value="p-wd-finance-api">p-wd-finance-api</choice>
          <default>p-wd-finance-api</default>
          <initialValue>p-wd-finance-api</initialValue>
          <fieldForLabel>ApplicationName</fieldForLabel>
          <fieldForValue>ApplicationName</fieldForValue>
        </input>
        <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
          <label>InterfaceName</label>
          <default></default>
          <initialValue></initialValue>
        </input>
        <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
          <label>ApplicationName</label>
          <choice value="p-oracle-fin-processor">p-oracle-fin-processor</choice>
          <choice value="p-oracle-fin-processor-2">p-oracle-fin-processor-2</choice>
          <choice value="p-wd-finance-api">p-wd-finance-api</choice>
          <default>p-wd-finance-api</default>
          <initialValue>p-wd-finance-api</initialValue>
          <fieldForLabel>ApplicationName</fieldForLabel>
          <fieldForValue>ApplicationName</fieldForValue>
        </input>
        <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
          <label>InterfaceName</label>
          <default></default>
          <initialValue></initialValue>
        </input>
      </panel>
    </row>