All Topics



Hello folks, good morning to one and all. I have the Trend Micro Cloud One service, and I want to integrate it with a Splunk instance hosted in the cloud. Kindly suggest a mechanism for this; as far as I have checked, there is no add-on available. I know that Trend Micro Cloud One can forward logs via syslog, but since the Splunk instance is in the cloud, what would be the Splunk interface for receiving syslog in this integration? Please share your opinion on this.

Regards,
Gautam Khillare (GK)
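One common pattern, since Splunk Cloud does not expose a syslog listener directly, is to stand up an intermediate forwarder (or Splunk Connect for Syslog) that receives the syslog feed and relays it to the cloud stack. A minimal inputs.conf sketch for such an intermediate forwarder; the port, the index name trendmicro, and the sourcetype trendmicro:cloudone are illustrative assumptions, not values from Trend Micro or Splunk documentation:

    # inputs.conf on the intermediate forwarder that relays to Splunk Cloud
    [udp://514]
    index = trendmicro
    sourcetype = trendmicro:cloudone
    connection_host = ip

Note that ports below 1024 need elevated privileges on Linux, so a higher port such as 5514 is often used instead; a dedicated syslog server (for example SC4S forwarding over HEC) is generally preferred for production, since raw UDP inputs drop data whenever the forwarder restarts.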
Splunk seems to have a problem with authenticating a SAML user account using a token. The purpose of using token authentication is to allow an external application to run a search and get the results. A sample script is posted on GitHub as a code gist; the script simply starts a search but does not wait for the results. The problem is that token authentication with a SAML account only works while that SAML user is logged in on the Splunk web GUI and the interactive session is still valid. The problem is shown in the internal log:

07-03-2023 19:35:53.931 +0000 ERROR Saml [795668 AttrQueryRequestExecutorWorker-0] - No status code found in SamlResponse, Not a valid status.
07-03-2023 19:35:53.901 +0000 ERROR Saml [795669 AttrQueryRequestExecutorWorker-1] - No status code found in SamlResponse, Not a valid status.

The theory on the failure is:
- The token authentication works within Splunk;
- But Splunk needs to perform RBAC after authentication, so it does an AQR (Attribute Query Request) after the authentication -- in this case, to get the user's group memberships to map to Splunk roles;
- However, when there is no valid, live SAML session, the AQR fails.

I wonder if anyone has been able to get token authentication to work for a SAML account? [Edit]: On the other hand, is it simply impossible to use token authentication with a SAML user account?
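For reference, a minimal sketch of the kind of token-authenticated call involved; the host, port, and $SPLUNK_TOKEN value are placeholders, and this only reproduces the setup rather than fixing the AQR behavior:

    curl -k -H "Authorization: Bearer $SPLUNK_TOKEN" \
        https://splunk.example.com:8089/services/search/jobs \
        -d search="search index=_internal | head 5"

With a local (non-SAML) account's token this returns a search job SID; with a SAML account's token it is this kind of call that fails once the interactive session expires.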
Hi, I want to know what happens after a Splunk universal forwarder reaches its throughput limit, because I found that my universal forwarder stops ingesting data at a certain moment every day, and I don't know what happened. I just set up the thruput setting in limits.conf and restarted the UF, and the remaining data was collected, although I'm not sure whether it will still be effective next time... So when the throughput limit is reached, will the Splunk UF stop collecting data until the next restart?
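For context, the relevant stanza is [thruput] in limits.conf on the forwarder; when maxKBps is reached the UF throttles (pauses reading and falls behind) rather than stopping permanently, so a stall at the same time every day usually means the limit is too low for the daily data volume. A sketch; 1024 is an illustrative value (the UF default is 256), and 0 disables the limit entirely:

    # limits.conf on the universal forwarder
    [thruput]
    maxKBps = 1024

You can confirm throttling in the forwarder's own splunkd.log by looking for "current data throughput" messages from the ThruputProcessor.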
I need to run curl commands for various tasks such as creating searches, accessing searches, etc. I have the below command, which works perfectly:

curl -k -u admin:test12345 https://127.0.0.1:8089/services/saved/searches/ \
    -d name=test_durable \
    -d cron_schedule="*/15 * * * *" \
    -d description="This test job is a durable saved search" \
    -d dispatch.earliest_time="-15h@h" -d dispatch.latest_time=now \
    --data-urlencode search="search index=_audit sourcetype=audittrail | stats count by host"

But given that I may have to craft various curl commands with different -d flags, I want to be able to pass values through a file, so I used the below command:

curl -k -u admin:test12345 https://127.0.0.1:8089/services/saved/searches/ --data-binary data.json

where data.json looks like this:

{
    "name": "test_durable",
    "cron_schedule": "*/15 * * * *",
    "description": "This test job is a durable saved search",
    "dispatch.earliest_time": "-15h@h",
    "dispatch.latest_time": "now",
    "search": "search index=_audit sourcetype=audittrail | stats count by host"
}

But in doing so I get the following error:

<?xml version="1.0" encoding="UTF-8"?>
<response>
    <messages>
        <msg type="ERROR">Cannot perform action "POST" without a target name to act on.</msg>
    </messages>
</response>

After going through a lot of different posts on this topic, I realised Splunk seems to have a problem with the JSON format, or mainly with extracting the 'name' attribute from it. Can someone please assist with how I can craft a curl command that takes its data from a file like the above and gets a correct response from Splunk?
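Two things appear to be going on here. First, --data-binary data.json sends the literal string "data.json" as the request body (curl needs @data.json to read from a file), which is why Splunk sees no name parameter at all. Second, this endpoint expects application/x-www-form-urlencoded key=value pairs rather than JSON. One hedged way to keep the per-call values in a file is curl's -K/--config option; the file name and contents below are illustrative:

    # saved_search.cfg -- one curl option per line, read with -K
    data = "name=test_durable"
    data = "cron_schedule=*/15 * * * *"
    data = "description=This test job is a durable saved search"
    data = "dispatch.earliest_time=-15h@h"
    data = "dispatch.latest_time=now"
    data-urlencode = "search=search index=_audit sourcetype=audittrail | stats count by host"

Then the command itself stays constant across tasks:

    curl -k -u admin:test12345 -K saved_search.cfg https://127.0.0.1:8089/services/saved/searches/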
Hello, I am aware that Splunk is a very versatile application that allows users to manipulate data in many ways. I have extracted the fields event_name, task_id, and event_id. I am trying to create an alert if there is an increment in the event_id for the same task_id and event_name when the latest event arrives in Splunk. For example, the event at 3:36:40.395 PM has task_id 3 and event_id 1223680, AND the latest event, which arrived at 3:52:40.395 PM, has task_id 3 and event_id 1223681. I am trying to create an alert because, for the same task_id (3) and event_name (server_state), there is an increment in event_id. I believe this is only possible if we store the previous event_id in a variable for the same event_name and task_id so that we can compare it with the new event_id. However, we have four different task_ids, and I am not sure how to save the event_id for all of them. Any help would be appreciated.

Log file explanation:

8/01/2023 3:52:40.395 PM server_state|3 1123681 5
Date Timestamp event_name|task_id event_id random_number

Sample log file:

8/01/2023 3:52:40.395 PM server_state|3 1223681 5
8/01/2023 3:50:40.395 PM server_state|2 1201257 3
8/01/2023 3:45:40.395 PM server_state|1 1135465 2
8/01/2023 3:41:40.395 PM server_state|0 1545468 5
8/01/2023 3:36:40.395 PM server_state|3 1223680 0
8/01/2023 3:25:40.395 PM server_state|2 1201256 2
8/01/2023 3:15:40.395 PM server_state|1 1135464 3
8/01/2023 3:10:40.395 PM server_state|0 1545467 8

Thank you
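There is no need to store state in a variable; streamstats can carry the previous event_id per event_name/task_id pair across however many task_ids exist. A sketch, with the index and sourcetype as placeholders and assuming the three fields are already extracted:

    index=your_index sourcetype=your_sourcetype
    | sort 0 _time
    | streamstats current=f window=1 last(event_id) as prev_event_id by event_name task_id
    | where event_id > prev_event_id

Scheduled as an alert over a recent window (for example, every 5 minutes over the last 5 minutes), each result row is an event whose event_id increased relative to the previous event for the same event_name and task_id.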
I audit Windows computers. My search returns the date, time, EventCode, and Account_Name:

Date            Time        EventCode    Account_Name
2023/08/29      16:09:30    4624         jsmith

I would like the Time field to turn red when a user signs in after hours (1800-0559). I have tried clicking on the pencil in the Time column and selecting Color, then Ranges, but I always get error messages about not putting the numbers in the correct order. What do I need to do?
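The range editor only accepts an ascending list of numbers, and two things work against you here: Time is a string, and an 1800-0559 window wraps around midnight, which a single ascending range cannot express. One hedged workaround is to compute a numeric hour in the search (the field name hour is illustrative):

    ... | eval hour = tonumber(strftime(_time, "%H"))

and then color that field with an expression-based palette in the dashboard's Simple XML:

    <format type="color" field="hour">
      <colorPalette type="expression">if(value >= 18, "#DC4E41", if(value &lt; 6, "#DC4E41", "#FFFFFF"))</colorPalette>
    </format>

This colors the hour column rather than Time itself; coloring one column based on another field's value requires custom JavaScript, so showing the computed hour is the simpler route.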
Hello, I've been attempting to use the results of a subsearch as input for the main search, with no luck; I'm getting no results. Based on the query below, I was thinking of getting the field value of Email_Address from the subsearch and passing the result to the main search (in my mind, only the Email_Address value). The main search would then use the passed Email_Address value as search criteria to find events in another index. Is that the correct way to pass values as a searchable value, or am I wrong? If I'm wrong, how can I do this? I thank you all in advance for your assistance!

index=firstindex Email_Address
    [search index=secondindex user="dreamer"
    | fields Email_Address
    | head 1 ]
| table Date field1 field2 Email_Address
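The approach is right in spirit, but two details bite: the bare Email_Address term before the subsearch matches the literal string "Email_Address" in events, and the subsearch's default formatting may not produce the term you expect. A sketch using return, which emits Email_Address="value" into the outer search (assuming the field has the same name in both indexes):

    index=firstindex
        [ search index=secondindex user="dreamer"
          | head 1
          | return Email_Address ]
    | table Date field1 field2 Email_Address

If the field names differ between the indexes, rename inside the subsearch first (for example, | rename user_email as Email_Address) so the generated term matches the outer index's field.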
We are noticing that the same data received via the HTTP Event Collector is not searchable by field like data received via our forwarders. The EventName field IS NOT being picked up from events received through HEC, while EventName IS getting picked up from events received through forwarders. It seems that the events received through HEC are treated as one large blob of data and are not parsed or indexed the same way by Splunk. Is there anything that can be done in the request to HEC, or on an indexer, to resolve this? Thanks.
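A first thing to check, offered as a hedged guess: field extraction is driven by sourcetype, and HEC events sent without one get a default sourcetype, so none of the search-time props (for example, KV_MODE=json) configured for the forwarder data apply to them. A sketch of an HEC request that sets the sourcetype explicitly; the token, host, and names are placeholders:

    curl -k https://splunk.example.com:8088/services/collector/event \
        -H "Authorization: Splunk <hec-token>" \
        -d '{"sourcetype": "your:sourcetype", "event": {"EventName": "Login", "user": "jsmith"}}'

Alternatively, the /services/collector/raw endpoint sends the payload through the regular parsing pipeline like forwarder data, which matters if your extractions are index-time.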
As an app/add-on creator, we don't have control over the indexes available in the user's Splunk Cloud environment. In the app's inputs.conf we set index = default. In the add-on flow we add a data input configuration, with the new input stream URLs and the index it should point to, as shown in the image below. As you can see, the index is populated with "default".

- How can we enable the drop-down to show all available indexes?
- If the desired index is not in the available list, how can we enable the user to input a string and trigger a search?
- If the user doesn't want to pick an index, the default should be selected.
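For populating such a picker, one common sketch is to enumerate the indexes at setup time, either with a search or via REST; whether your add-on framework exposes a hook for this is a separate question:

    | eventcount summarize=false index=* | dedup index | table index

or, over the management port (credentials and host are placeholders):

    curl -k -u admin:changeme https://localhost:8089/services/data/indexes?output_mode=json

Both require the user's role to have permission to see the indexes in question, which on Splunk Cloud is worth checking first.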
Hello all, how do I create a dependent dropdown based on a saved search? I am using a saved search, but when I add a | search command it doesn't work. Please suggest. Thanks
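For what it's worth, piping | search after | savedsearch is valid SPL, so the usual culprits are token quoting or field names. A Simple XML sketch; the saved-search names and fields (my_regions, my_cities, region, city) are placeholders:

    <input type="dropdown" token="region">
      <search>
        <query>| savedsearch my_regions</query>
      </search>
      <fieldForLabel>region</fieldForLabel>
      <fieldForValue>region</fieldForValue>
    </input>
    <input type="dropdown" token="city">
      <search>
        <query>| savedsearch my_cities | search region="$region$"</query>
      </search>
      <fieldForLabel>city</fieldForLabel>
      <fieldForValue>city</fieldForValue>
    </input>

The second dropdown's query re-runs whenever $region$ changes, which is what makes it dependent.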
I created a lookup table for blacklisted DNS queries. I need a query that uses the lookup table to see if domains in the lookup table are present in events in my environment. 
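A common pattern is to turn the lookup into a set of search terms with a subsearch; a sketch, where blacklist_dns.csv, its domain column, and the event field query are all assumptions to adjust:

    index=your_dns_index sourcetype=your_dns_sourcetype
        [ | inputlookup blacklist_dns.csv
          | fields domain
          | rename domain as query ]
    | stats count by query

The subsearch expands to (query="bad1.com") OR (query="bad2.com") ..., so only events whose query matches a blacklisted domain survive. For very large lookups, the lookup command with a match check scales better than a subsearch.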
Hello all, could you please help me with one question: is it possible to add a PNG image to a rectangle? Just as an example, the rectangle is set up like this; is it possible to include an image in the corner of the rectangle?

<a href="">
  <g>
    <rect style=fill:color_grey width="150" height="90" x=1200 y=200/>
  </g>
</a>

Thank you for any help and answers.
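In SVG this is what the <image> element is for; a sketch, with icon.png and the sizes as placeholders, positioning the image at the rectangle's top-left corner:

    <a href="">
      <g>
        <rect style="fill:grey" width="150" height="90" x="1200" y="200"/>
        <image href="icon.png" x="1200" y="200" width="24" height="24"/>
      </g>
    </a>

Note that href on <image> is SVG 2; older renderers may need xlink:href instead, with the xlink namespace declared on the root <svg> element.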
I want to add three fields (insert, update, and error), then subtract the total from count_carmen and add a new row.
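Reading between the lines, a hedged SPL sketch; the interpretation (sum the three fields per row, take the difference from count_carmen, then append a summary row) and the label field are assumptions:

    ... | eval difference = count_carmen - (insert + update + error)
        | appendpipe
            [ stats sum(difference) as difference
              | eval label="TOTAL" ]

If the intent is different (for example, a single extra row holding the three sums), the same eval/appendpipe pattern adapts.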
Hi experts, I would like to rename a sourcetype at index time with the below config.

props.conf:

[source::test/source.txt]
TRANSFORMS-sourcetype = newsourcetype

transforms.conf:

[newsourcetype]
SOURCE_KEY = MetaData:Sourcetype
REGEX = regex to match existing sourcetype
FORMAT = newsourcetype
DEST_KEY = MetaData:Sourcetype

Now I would like to apply the below settings to the new sourcetype:

[newsourcetype]
TZ =
LINE_BREAKER =
TRUNCATE =
etc.

Will it work this way? Please let me know.

Thanks,
Ram
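Most likely not for those particular settings, as a hedged reading of the pipeline order: line breaking, TRUNCATE, TZ, and timestamp extraction happen in the parsing pipeline, while TRANSFORMS-based metadata rewrites run later in the typing pipeline, so by the time the sourcetype becomes newsourcetype the parse-time settings have already been applied under the original source/sourcetype. A sketch of where each setting then has to live (the values are illustrative):

    # props.conf -- parse-time settings stay keyed to the original source
    [source::test/source.txt]
    TRANSFORMS-sourcetype = newsourcetype
    TZ = UTC
    LINE_BREAKER = ([\r\n]+)
    TRUNCATE = 10000

    # search-time settings (extractions, KV_MODE, etc.) can go on the new name
    [newsourcetype]
    KV_MODE = auto

Also note that a metadata rewrite usually needs FORMAT = sourcetype::newsourcetype with DEST_KEY = MetaData:Sourcetype; worth double-checking against the transforms.conf spec.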
We have 14k events in an event index, and the data is unstructured. I'm trying to ingest this data into a metric index at search time using the mcollect command, and I was able to convert the event logs to metrics. Per the Splunk docs, a metric index is optimized for the storage and retrieval of metric data. While there is an improvement in search time, the storage size drastically increased instead of decreasing. How is storage optimized in the case of a metric index? Is there any additional configuration that needs to be set up? I have updated always_use_single_value_output for the mcollect command to false in limits.conf.
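One factor worth ruling out, stated as a hedged guess: with single-value output, each measurement becomes its own metric event carrying a full copy of the dimensions, so one log event with several numeric fields can fan out into several metric events. With always_use_single_value_output = false, packing multiple metric_name:* measures into one event keeps the dimensions from being duplicated. A sketch; the index, fields, and dimension names are placeholders:

    index=your_events sourcetype=your_sourcetype
    | rename cpu as "metric_name:cpu.pct", mem as "metric_name:mem.pct"
    | mcollect index=your_metric_index host app

Comparing the event count in the metric index against the 14k source events should show whether fan-out is where the extra storage is going.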
Hi, I have set up an environment to learn at home. I have 2 instances, one serving as a Splunk forwarder where I have my data, and the other serving as deployment server + indexer + search head. I configured the serverclass and the app; however, I'm not getting data into the index from the forwarder, even though I checked the logs on the latter and the connection is successful. Is it because of the trial license? Any thoughts on why it is not working as expected? Any info would be appreciated. Thanks.
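The trial license is unlikely to be the cause; it enforces a daily volume limit rather than blocking inputs. A couple of hedged checks to run on the indexer, with the forwarder hostname as a placeholder:

    index=_internal host=your_forwarder (log_level=ERROR OR log_level=WARN)

    index=_internal source=*metrics.log* group=tcpin_connections
    | stats sum(kb) as kb by hostname

The first shows what the forwarder itself is complaining about (its internal logs reach the indexer even when your data does not); the second confirms whether bytes are arriving over the receiving port. If both look healthy, the usual suspects are the inputs.conf in the deployed app and whether the target index exists.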
Hello, how do I query a field in dbxquery that contains a colon? I ran the following query and got an error:

| dbxquery connection=visibility query="select abc:def from tableCompany"

org.postgresql.util.PSQLException: ERROR: syntax error at or near ":" Position:

I tried to put single quotes around it:

| dbxquery connection=visibility query="select 'abc:def' from tableCompany"

but that gave me the following result:

?column?
abc:def
abc:def

Thank you
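That behavior is PostgreSQL's, not Splunk's: single quotes make a string literal (hence the constant 'abc:def' repeated for every row under the ?column? header), while an identifier with special characters such as a colon needs double quotes. Inside the SPL string the inner double quotes have to be escaped; a sketch:

    | dbxquery connection=visibility query="select \"abc:def\" from tableCompany"

Quoted identifiers are also case-sensitive in PostgreSQL, so the column name must match exactly as it was created.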
I'm facing a rather peculiar issue with dashboards. When non-admin users, or users without the admin_all_objects capability, access the dashboard, all panels display "Waiting for data..." indefinitely. However, the strangest part is that if the user clicks on the search of a panel and is redirected to the search view, the results appear immediately.

Here's what I've tried so far:
- Searched through community questions and issues, but found nothing that matches this issue exactly.
- Experimented with different capabilities, but it seems only the admin_all_objects capability solves this issue.
- Attempted to adjust the job limits similar to those set for admin users.

Assigning the admin_all_objects capability to all users is not a viable solution for me due to security concerns. Has anyone encountered this issue before? I'm running out of ideas and would appreciate any help or insights on this.

Note: Tested also on a local instance deployed via ansible-role-for-splunk to reproduce.

Thank you in advance for your time and assistance.
Hello, we are new to the Splunk environment and are using Enterprise v9.01. We have a complete driver package from CData that allows us to use 100+ different ODBC and JDBC drivers. I tried the Splunk DB Connect add-on and I can connect to a SQL DB. Can Splunk actually make connections to other JDBC/ODBC data sources from CData, such as MongoDB, Teams, OneNote, etc.? Please let us know.
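DB Connect is JDBC-only (no ODBC), but it does support adding unlisted JDBC drivers by dropping the driver JAR into the app and describing it in db_connection_types.conf. A hedged sketch for a CData MongoDB driver; the stanza keys follow the DB Connect spec, while the class name and URL format are assumptions to check against the CData documentation:

    # $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/db_connection_types.conf
    [cdata_mongodb]
    displayName = CData MongoDB
    serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
    jdbcDriverClass = cdata.jdbc.mongodb.MongoDBDriver
    jdbcUrlFormat = jdbc:mongodb:Server=<host>;Port=<port>;Database=<database>

Whether a given CData driver behaves well under DB Connect is something to verify per driver; sources such as Teams or OneNote are API-backed rather than relational, so results may vary.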
Hello, we are from a software editor's integration team, and we would like to help our customers integrate our logs easily into their Splunk. So we developed a Python script, using your samples and our own code, to access our audit trail API. The script works well outside Splunk: it retrieves our logs as soon as there are new entries and forwards the JSON result to stdout. But as soon as we put it inside Splunk, we get "ERROR ExecProcessor" errors which are not very self-explanatory:

-----------------------------------
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from... ...bin\scripts\Final-2.py"", line 57, in <module>
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from .... ...bin\scripts\Final-2.py"" response = requests.get(url, headers={'Content-Type': 'application/json'}, cert=cert_context, verify = False)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ... ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\api.py", line 76, in get
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from .... ...bin\scripts\Final-2.py"" return request('get', url, params=params, **kwargs)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from .... ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\api.py", line 61, in request
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from .... ...bin\scripts\Final-2.py"" return session.request(method=method, url=url, **kwargs)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from .... ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\sessions.py", line 542, in request
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from .... ...bin\scripts\Final-2.py"" resp = self.send(prep, **send_kwargs)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from .... ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\sessions.py", line 655, in send
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from .... ...bin\scripts\Final-2.py"" r = adapter.send(request, **kwargs)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from .... ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\adapters.py", line 416, in send
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from .... ...bin\scripts\Final-2.py"" self.cert_verify(conn, request.url, verify, cert)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from .... ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\adapters.py", line 250, in cert_verify

It seems our script is refused at the line:

response = requests.get(url, headers={'Content-Type': 'application/json'}, cert=cert_context, verify = False)

We tried with and without verify = False, with no clue why it's refused. Do you have any ideas about why it gets stuck inside Splunk? (We tried on Linux and on Windows with the same result.)

Best regards,
TrustBuilder team
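The truncated last frame is requests' cert_verify, which raises when the client-certificate path cannot be found, and that check is sensitive to the working directory: splunkd launches scripted inputs from its own directory, so a relative path in cert_context that works on the command line can fail inside Splunk on both Linux and Windows. A sketch of the usual fix, resolving the path against the script's own location (client.pem and the URL are placeholders):

    import os
    import requests

    # resolve the certificate relative to this script, not the process CWD
    SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
    cert_context = os.path.join(SCRIPT_DIR, "client.pem")  # placeholder file name

    url = "https://audit.example.com/api/events"  # placeholder endpoint
    response = requests.get(
        url,
        headers={"Content-Type": "application/json"},
        cert=cert_context,
        verify=False,  # kept as in the original script; enable verification in production
    )
    print(response.text)

Note that requests checks the cert file's existence even when verify=False, which would explain why toggling that flag made no difference.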