All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I have an application hosted on a vendor's GCP, and the application's logs are stored in BigQuery on that GCP. I need to set up Splunk in my infrastructure to monitor the application hosted outside my infra (the vendor's GCP). Has anyone done something like this? Do you know how I can ingest the logs into Splunk Enterprise?
Hello, I am trying to add a search peer to our existing environment in order to scale it up a bit. The main instance is Splunk Enterprise, which acts as the search head, indexer, and pretty much everything else. When I add the second Splunk Enterprise server that I set up as a peer under Distributed Search > Search Peers, essentially everything stops working on the main instance: searches never load and everything is extremely slow. This happens when I add the second, new server as a peer on the main instance. I've tried adding it both ways, and/or enabling it on both, but nothing seems to work. My initial thought is that maybe it's because the main instance isn't divided into multiple parts, like a separate server for a search head with the two indexers under it, but that seems much more complicated to set up than I want. I'm just looking to add a peer as another indexer-type server to expand a bit. Any thoughts are appreciated. Thanks!
Hi all, we have been having issues with this app since installation: no relevant data is coming in for the Salesforce app. After installing it, we created the configuration and data inputs, but we couldn't find any useful information or events. Splunkbase: https://splunkbase.splunk.com/app/5689 I have attached a screenshot of the events we are receiving now, but those are not actual events from Salesforce. Please let me know if someone can help me solve this issue. Any suggestions would be appreciated. Thanks in advance.
Hello guys, I'd like to create a search based on business hours, using a field with a value like this: "2023/01/20 08:52:58". The hour number (08) is the interesting part, and I'd like to search on multiple values, for example 08-18h [08,09,10,11,12,13,14,15,16,17,18]. How could I write a regex to extract these numbers? Thanks a lot!
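A sketch in SPL (the field name myTimeField is an assumption): extract the two hour digits with rex, then filter on the 08-18 range:

```
| rex field=myTimeField "^\d{4}/\d{2}/\d{2}\s(?<hour>\d{2})"
| where tonumber(hour) >= 8 AND tonumber(hour) <= 18
```

Converting with tonumber avoids string comparison of the zero-padded values.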
Hi, do you have a tentative timeline of when Splunk will deprecate Python 2 on the Splunk Cloud platform? Thanks.
Could you provide me with Splunk Enterprise Security learning links, starting from scratch, zero-to-hero classes? Thanks!
Hi guys, Happy New Year. I'm doing some code testing with the Splunk HEC, and now I need to transfer some large-volume data, gzip-compressed.

1. First, I found one limit in $SPLUNK_HOME/etc/system/default/limits.conf:

[http_input]
max_content_length = <integer>
* The maximum length, in bytes, of HTTP request content that is accepted by the HTTP Event Collector server.
* Default: 838860800 (~ 800 MB)

However, this value seems to be calculated against the size after decompression: my test file is about 50 MiB, far less than 800 MB, but when I send the request, Splunk raises "413 Content-Length of 838889996 too large (maximum is 838860800). The request your client sent was too large."

2. The second limit I found is in $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf:

[http]
maxEventSize = <positive integer>[KB|MB|GB]
* The maximum size of a single HEC (HTTP Event Collector) event.
* HEC disregards and triggers a parsing error for events whose size is greater than 'maxEventSize'.
* Default: 5MB

I think this limit applies to the size of a single event? If I send batch events in one request via "/services/collector", will this limit apply to every event in the batch individually? Can any experts help confirm this behavior? If more details are needed, feel free to let me know. Many thanks!
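If the decompressed payload really is what gets measured, one workaround is to raise the limit in a local override rather than editing the default file (a sketch; the value shown is just an example):

```
# $SPLUNK_HOME/etc/system/local/limits.conf
[http_input]
# Accept HTTP request content up to ~1.6 GB (value is in bytes)
max_content_length = 1677721600
```

Settings in system/local take precedence over system/default, and a Splunk restart is typically needed for the change to take effect.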
Hello, my events contain strings such as:

notification that user "mydomain\bob" has
notification that user "fred" has
notification that user "01\ralph2" has

I'm trying to write a conditional EXTRACT in props.conf, so that a new field 'domain' is assigned the domain name (i.e. mydomain, 01) where specified, else is assigned NULL, and a new field 'user' is assigned the user name (i.e. bob, fred, ralph2). This works well enough when there is both a domain and a user, but obviously not when there isn't a domain:

EXTRACT-domain_user = notification\sthat\suser\s\"(?<domain>[\w\d]+)\\(?<user>[\w\d]+)\"\shas

I'd be grateful for some assistance.
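One possible approach (a sketch, not tested against real data; the sourcetype name is hypothetical): wrap the domain portion in an optional non-capturing group, so 'domain' is simply not extracted when no backslash is present:

```
# props.conf (sketch)
[my_sourcetype]
EXTRACT-domain_user = notification\sthat\suser\s"(?:(?<domain>[^\\"]+)\\)?(?<user>[^"]+)"\shas
```

For "mydomain\bob" this yields domain=mydomain and user=bob; for "fred" the optional group does not match, so only user=fred is extracted.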
Is it possible to assign a value to a different field? I am trying to combine two different events from the same index. One has the field I need (the IP address), while the other doesn't have it in the raw logs. Is it possible to assign/pass the value from one to the other?

date | name | description | ip
1/15/2023 12:05 | xxx | this is test 1 | 192.x.x.x
1/15/2023 12:06 | xxx | this is test 2 |
1/15/2023 12:06 | xxx | this is test 1 | 192.x.x.x

I tried using eval and passing the data, but it fails. Using fillnull and assigning a fixed value doesn't fix it; the value should come from the IP above, or within that same date. Thank you in advance for any advice.
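A sketch in SPL (the index name is an assumption): eventstats can copy a known ip value across events that share the same name and date, so rows missing ip inherit it from the matching event:

```
index=myindex
| eventstats values(ip) as ip by date, name
```

Unlike stats, eventstats keeps every original event and just adds the aggregated field back onto each one.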
Hi, we are using the IntSights app for Splunk Cloud, with the app installed on a Splunk IDM. We notice that when we try to create an input to fetch the alerts, we are not able to select the custom index created on the indexer. Why are the indexes present in Splunk Cloud not populating in the IntSights app on the Splunk IDM?
A service was set up to measure datastore free space usage, overprovisioning, and read and write activity. The thresholds per entity work fine, and many entities are displayed. The aggregated threshold for the KPIs, however, shows only the values of the top entity. According to the manual, the aggregated threshold should display the average of all entities. Is there a setting that I am not using correctly?
Hi! I'm trying to accelerate only one dataset in a data model that has multiple datasets. How can I do it through datamodels.conf or in the web UI? In the web UI I can't choose acceleration in the Edit dropdown. :(
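As far as I can tell, acceleration in Splunk applies to an entire data model rather than to an individual dataset, which would explain the missing per-dataset option. For completeness, a sketch of enabling acceleration for a whole model in datamodels.conf (the stanza name is hypothetical and must match the data model's ID):

```
# datamodels.conf (sketch)
[my_datamodel]
acceleration = true
acceleration.earliest_time = -7d
```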
Hi team, I need assistance with the installation of the app agent at step 5, as shown in the attached screenshot. At points 2 & 3 we were stuck because we were not able to execute the -javaagent command. Please help, as we have been stuck for a long time.
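For reference, -javaagent is a JVM startup option rather than a standalone command, so it has to be appended to the Java command that launches the application (a sketch; the agent path and jar name are assumptions):

```
java -javaagent:/opt/appagent/javaagent.jar -jar myapp.jar
```

Running "-javaagent ..." on its own in a shell will fail, which may be what is happening at points 2 & 3.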
Hey people, I want to find out the total number of hours that have elapsed since the last event was raised. This is what I was doing previously:

| stats latest(_time) as last_log_time
| eval timeElapsedSinceLastLog=tostring(now() - last_log_time)
| fieldformat timeElapsedSinceLastLog = strftime(timeElapsedSinceLastLog, "%H:%M:%S")
| fields timeElapsedSinceLastLog

This gives me a result, but it has been more than a week since the last event was raised, so the value is wrong. I would also be happy to get the number of days elapsed, with the time as well (if days < 1).
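A sketch using eval's built-in duration formatting, which renders an elapsed time in seconds as days plus HH:MM:SS once it exceeds a day:

```
| stats latest(_time) as last_log_time
| eval timeElapsedSinceLastLog = tostring(now() - last_log_time, "duration")
| fields timeElapsedSinceLastLog
```

The strftime approach in the original search interprets the duration as a point in time, which is why it wraps around past 24 hours.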
I need to extract the ITSI app version from the app.conf file, to display the data on a dashboard. I found a way using the config parser, but it's not very clear.
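One alternative that avoids parsing app.conf directly (a sketch; "itsi" is assumed to be the app's directory name on your instance): the REST apps endpoint exposes each installed app's version:

```
| rest /services/apps/local splunk_server=local
| search title="itsi"
| table title, version
```

This can be used as the base search of a dashboard panel, subject to the user having REST access.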
Hi, I want to onboard unique data from SQL Server to Splunk. I have the DB Connect app and I have configured everything. We have more than 4 lakh (400,000) events in the database, and it is dynamic. We have three fields: equipment number, contact number, and company code. Equipment numbers are added/updated in the database once a week. How can I onboard only unique equipment numbers every time?
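DB Connect's rising-column input type is the usual way to pick up only new rows on each run. A sketch of the checkpointed query (table and column names are assumptions; the ? placeholder is filled in by DB Connect with the last checkpoint value of the rising column):

```
SELECT equipment_number, contact_number, company_code
FROM equipment
WHERE equipment_number > ?
ORDER BY equipment_number ASC
```

This only helps if equipment_number (or another column, such as a last-modified timestamp) increases monotonically; updates to existing rows need a timestamp-based rising column instead.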
Hello. We're trying to integrate our Golang application with Splunk through APM by following this documentation. Is there any difference, especially in terms of cost, between sending the data directly to Splunk and going through the Splunk collector?
Hello, apologies if this was asked previously. I have multiple calls, each RequestID with a RequestReceive and a ResponseTransmit. I am trying to find the difference between the two timestamps below (the ResponseTransmit timestamp minus the RequestReceive timestamp), then put that into a stats command grouped by clientPathURI, showing the difference between the timestamps. Any assistance is much appreciated!

{
   RequestID: b74fab20-9a7b-11ed-bd70-c503548afa99
   clientPathURI: signup
   level: Info
   logEventType: ResponseTransmit
   timestamp: 2023-01-22T12:43:57.547-05:00
}

{
   RequestID: b74fab20-9a7b-11ed-bd70-c503548afa99
   clientPathURI: signup
   level: Info
   logEventType: RequestReceive
   timestamp: 2023-01-22T12:43:57.496-05:00
}
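A sketch that pairs the two events per RequestID and aggregates by clientPathURI (the index name is an assumption; the timestamp field is assumed to be extracted as shown in the events):

```
index=myindex logEventType IN (RequestReceive, ResponseTransmit)
| eval t = strptime(timestamp, "%Y-%m-%dT%H:%M:%S.%3N%z")
| stats earliest(t) as received, latest(t) as transmitted by RequestID, clientPathURI
| eval responseTime = transmitted - received
| stats avg(responseTime) as avgResponseTime by clientPathURI
```

The first stats collapses each RequestID's pair into one row with both timestamps; the second aggregates the per-request durations (in seconds) per clientPathURI.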