All Topics

Hello all, I am confused about which machines my inputs.conf files should be configured on.
1. I am currently operating under the assumption that inputs.conf files are primarily for the indexer. Is this correct?
2. If I update an inputs.conf file, do I need to push the updated file through my deployment server so that the inputs.conf files tied to the applications on the Splunk Universal Forwarders reflect the same changes made on the manager?
a. I have raw XML data populating, and I wish to fix this so that it is easier to read. Currently there is no source type in my inputs.conf. I believe applying an appropriate source type in inputs.conf is the first step to fixing this problem.
b. There are multiple stanzas in inputs.conf. Do I need to apply a source type to each of the stanzas that deal with sending XML logs, or is there a way to apply this change on a global scale? (See the sketch below.)
3. Will someone please explain the difference between source and source type? I have read the documentation on the matter and am still uncertain in my understanding.
Thanks for the help in advance!
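A minimal sketch of a forwarder-side monitor stanza with an explicit source type; the path, source type, and index names here are hypothetical placeholders, not values from the post:

[monitor:///var/log/myapp/events.xml]
sourcetype = myapp:xml
index = myapp_index
disabled = 0

In general, monitor stanzas belong on whichever machine can read the files (usually the universal forwarder, not the indexer), the source type is set per stanza, and settings placed under a [default] stanza at the top of inputs.conf apply to every stanza in that file, though per-stanza source types are usually clearer. When the forwarders are managed by a deployment server, the edit goes into the deployment app on the deployment server and is pushed out from there.
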
Hello, I have a Radial Gauge visualisation and want to specify a field as the range maximum value, using the following option: <option name="charting.chart.rangeValues">[0,"1000",2000,5000]</option>  Is this possible? Thanks for helping, hitchmontana
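One thing that may be worth trying (untested sketch): Simple XML substitutes tokens inside option values, so the maximum could be set from a search result via a token. The search, the field name (max_val), and the token name (range_max) below are hypothetical:

<search>
  <query>index=main | stats max(response_time) as max_val</query>
  <done>
    <set token="range_max">$result.max_val$</set>
  </done>
</search>
<option name="charting.chart.rangeValues">[0,1000,2000,$range_max$]</option>
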
Hi team, can we install a data collector node on a heavy forwarder? What precautions need to be taken with this? What other services can be installed on the heavy forwarder?
Hi, I have a query that needs to compare the count of the PF field between two log files. In Splunk I have two queries that create this table; the issue is that PF values which are equal in query1 and query2 need to show up in the same row.

current result:
hostname1  PF1     count1  hostname2  PF2     count2
host1      red     50      host2      yellow  90
host1      green   40      host2      green   90
host1      purple  50      host2      red     90

expected result:
hostname1  PF1     count1  hostname2  PF2     count2
host1      red     50      host2      red     90
host1      green   40      host2      green   90
host1      purple  50      host2      -       -
host1      -       -       host2      yellow  90

here is the query:
index="myindex" "mymodule*:" AND "P[" AND "F[" source="/tmp/*/log.srv23.*.bz2"
| rex field=source "\/.*\/log\.(?<servername>\w+)."
| rex "P(?<PF>\[\d+\]\[\d+\])"
| stats count as _PF by PF,servername
| stats list(_PF) as count list(PF) as PF by servername
| appendcols [search index="myindex" "mymodule*:" AND "P[" AND "F[" source="/tmp/*/log.srv24.*.bz2"
    | rex field=source "\/.*\/log\.(?<servername>\w+)."
    | rex "P(?<PF2>\[\d+\]\[\d+\])"
    | stats count as _PF2 by PF2,servername
    | stats list(_PF2) as count list(PF2) as PF2 by servername ]

Any idea? Thanks
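A possible alternative (untested sketch): instead of appendcols, search both sources in one pass and pivot by PF so equal PF values land on the same row. This assumes the servername extraction yields values like srv23 and srv24:

index="myindex" "mymodule*:" AND "P[" AND "F[" (source="/tmp/*/log.srv23.*.bz2" OR source="/tmp/*/log.srv24.*.bz2")
| rex field=source "\/.*\/log\.(?<servername>\w+)\."
| rex "P(?<PF>\[\d+\]\[\d+\])"
| chart count over PF by servername
| rename srv23 as count1, srv24 as count2
| fillnull value="-" count1 count2
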
I created an index in manager-apps/_cluster/local/indexes.conf on the cluster master in a cluster environment. The index now appears in peer-apps/_cluster/local/indexes.conf on the cluster indexers, but I cannot select that index from other servers (the deployment server or the search heads' Web UI). Please tell me how to create an index in a cluster environment.
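For reference, a minimal sketch of the kind of stanza that goes in manager-apps/_cluster/local/indexes.conf on the cluster master and is then distributed with the configuration bundle (the index name is a hypothetical placeholder):

[my_new_index]
homePath = $SPLUNK_DB/my_new_index/db
coldPath = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb

The bundle push (Settings > Indexer clustering > Edit > Distribute configuration bundle, or the CLI command splunk apply cluster-bundle on the master) only reaches the peers. Search heads, deployment servers and other instances do not receive it, so if the index should appear in their index pickers, the same index typically has to be defined in an indexes.conf on those instances as well.
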
Hi, I'm working on a Splunk operations team.
Version: Enterprise 7.3.4, CentOS 7
Environment: 3 SH, 10 IDX cluster. Search Factor: 3, Replication Factor: 3

Why do tsidx files for the same index have different sizes on different indexers?

My situation: the tsidx files on one specific indexer are 2-3x larger than on the rest of the indexers, so that indexer's storage sometimes goes above 97% and indexing stops. After a Splunk instance restart, storage stays at 86-87% for a few days (the rest of the indexers always remain at 86%). I checked the rawdata file sizes, but those only track event count. Not all indexes are affected; 10 of 100 indexes are. I can't find the cause. Help me. If you need additional information, please let me know. Thank you.
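A diagnostic sketch that may help narrow this down: compare bucket size on disk per indexer for one of the affected indexes (the index name is a placeholder; dbinspect is run from a search head that can see all peers):

| dbinspect index=<affected_index>
| stats sum(sizeOnDiskMB) as total_mb count as buckets by splunk_server
| sort - total_mb

If one splunk_server carries far more or far larger buckets than the others, that points at bucket distribution or replication rather than the tsidx files themselves.
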
Hello, it looks like the Microsoft Graph Security add-on adds all tags to all event types, so the data is not correctly CIM mapped. Has anyone filtered events based on the alert data and mapped them to the correct data models? For example, I need to sort events by type and map them to the following data models:
Malware
IDS
Endpoint
Alert
Thanks
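A rough sketch of the kind of local eventtypes.conf/tags.conf override that could split the tagging by alert content. The event type names, the sourcetype, and the category field and values in the searches are hypothetical and would need to match your actual Graph Security events; the tag sets follow the usual CIM conventions for the Malware and Intrusion Detection data models:

eventtypes.conf
[graph_security_malware_alert]
search = sourcetype="GraphSecurityAlert" category="Malware"

[graph_security_ids_alert]
search = sourcetype="GraphSecurityAlert" category="IntrusionDetection"

tags.conf
[eventtype=graph_security_malware_alert]
malware = enabled
attack = enabled

[eventtype=graph_security_ids_alert]
ids = enabled
attack = enabled
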
Hi guys, I am very new to Splunk Cloud and how things work here. Our current setup is:
1. UF (Linux) -> Heavy Forwarder (on-prem) -> Indexer/Search Head (Splunk Cloud)
2. Created a new index quifapp on Splunk Cloud.
3. The UF is already connected to the HF (just a dummy connection; verified that it is sending _internal logs to Splunk Cloud), as can be seen from the logs:
02-20-2024 11:22:11.394 +1100 INFO AutoLoadBalancedConnectionStrategy [566068 TcpOutEloop] - Found currently active indexer. Connected to idx=192.168.23.32:9997:0, reuse=1
4. The new request is to forward logs from /app/quif/quif.log to Splunk Cloud.
5. I have put the required config under /opt/splunkforwarder/etc/apps/quif/local, which has two files:
# cat inputs.conf
[monitor:///app/quif/quif.log*]
sourcetype=quif_requests
disabled=0
index=quifapp
# cat props.conf
[quif_requests]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true
EXTRACT-AgentId = ^\w+:\s+(?P<AgentId>\w+)
6. I restarted the SplunkForwarder but can't see any logs coming into the Cloud.
Is there any additional config required at any level? How can I troubleshoot?
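A troubleshooting sketch, assuming the UF's internal logs reach Splunk Cloud (the AutoLoadBalancedConnectionStrategy message suggests they do); the host value is a placeholder for the UF's hostname:

index=_internal host=<your_uf_host> sourcetype=splunkd (component=TailReader OR component=TailingProcessor OR component=WatchedFile) "/app/quif/quif.log"

This shows whether the tailing processor has picked up the file at all. Other common causes worth checking: whether the quifapp index really exists in Splunk Cloud (events sent to a nonexistent index are dropped unless a last-chance index is configured), whether the splunkfwd user can read /app/quif/quif.log, and the fact that search-time settings such as EXTRACT-AgentId in props.conf do nothing on a universal forwarder; they need to live where the searches run, i.e. in Splunk Cloud.
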
I'm not sure why rex is properly matching the beginning of the value I am looking for (NameofTeam), but it also matches and includes everything after it. As I understand it, my search should stop matching when it reaches "}, after matching the team name. What am I doing wrong?

index=test | rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Team>.+)\"},"

Sample Data:
{"Key": "OtherKey", "ValueString": "OtherValue"}, {"Key": "Owner", "ValueString": "NameofTeam"}, {"Key": "OtherKey", "ValueString": "OtherValue"},

Expected Output: NameofTeam

Actual Output: NameofTeam"}, {"Key": "OtherKey", "ValueString": "OtherValue"},
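A likely fix (sketch): .+ is greedy, so the capture runs to the last "}, in the event rather than stopping at the first one. Excluding double quotes from the capture group (or using a non-greedy .+?) makes it stop at the closing quote of the team name:

index=test | rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Team>[^\"]+)\"},"
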
Hello all, I hope this is the right forum. I am having some trouble with the Barracuda Email Security Gateway Add-on and field extraction. We have a Splunk Cloud subscription, and I am using an Ubuntu server with rsyslog and a universal forwarder to send syslog data to our Splunk Cloud instance. I have the Barracuda Email Security Gateway Add-on installed in our Splunk Cloud. The data from our Barracuda Email Gateway system goes to a file called /var/log/syslog_barracuda.log. I have my inputs.conf file configured as follows:
[monitor:///var/log/syslog_barracuda.log]
disabled = 0
sourcetype = barracuda
In our Splunk Cloud I see the events, and they have the "barracuda" sourcetype as expected. The problem is that no field extraction is applied to these events. Is there something I am missing? The add-on documentation only shows adding these lines to inputs.conf. Any help would be appreciated; I am new to Splunk and trying to wrap my head around everything.

We have a dashboard where we calculate memory usage for different hosts. For example, one panel uses this query:

|mstats avg(os.mem.utilized) as Memory_Used where index=metricsidx host=host1 OR host=host2 span=1d
|table Memory_Used
|foreach avg* [|eval <<FIELD>>=round('<<FIELD>>',2)]

We don't want to repeat this query in every panel; instead, I want to create a base search covering all hosts, like the query below, but it's not working. Can anyone help with this?

Base search:
|mstats avg(os.mem.utilized) as Memory_Used where index=metricsidx host IN(host1, host2, host3, host4, host5, host6, host7, host8, host9, host10) by host span=1d
|foreach avg* [|eval <<FIELD>>=round('<<FIELD>>',2)]

Post-process (panel) search:
|table Memory_Used
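A possible shape for this (untested sketch, with host1 as an example panel): keep the base search transforming and grouped by host, and have each panel's post-process search filter on its own host. Note that foreach avg* will not match a field already renamed to Memory_Used, so the rounding is applied to Memory_Used directly here:

Base search:
| mstats avg(os.mem.utilized) as Memory_Used where index=metricsidx host IN (host1, host2, host3, host4, host5, host6, host7, host8, host9, host10) by host span=1d
| eval Memory_Used=round(Memory_Used, 2)

Post-process search for the host1 panel:
| search host="host1"
| table _time Memory_Used
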
Query:
|tstats count where index=xxx host=host1 sourcetype=newsourcetype by PREFIX(type:) _time
|rename type: as Types
|timechart span=1d values(count) by Types
|eval Total=Model1 + Model2 + Model3 + Model4
|fillnull value=0

OUTPUT:
_time       Model1  Model2  Model3  Model4  Total
2021-04-12  2       0       1       4       0
2021-04-13  0       4       0       0       0
2021-04-14  8       2       10      4       24
2021-04-15  3       0       2       8       0
2021-04-16  1       4       2       2       9

EXPECTED OUTPUT:
_time       Model1  Model2  Model3  Model4  Total
2021-04-12  2       0       1       4       7
2021-04-13  0       4       0       0       4
2021-04-14  8       2       10      4       24
2021-04-15  3       0       2       8       13
2021-04-16  1       4       2       2       9
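A likely cause and fix (sketch): some of the Model columns are null rather than 0 before the final fillnull runs, and adding a null value makes Total null, which the trailing fillnull then turns into 0. Moving fillnull before the eval, so the addition only ever sees numbers, should produce the expected totals:

|tstats count where index=xxx host=host1 sourcetype=newsourcetype by PREFIX(type:) _time
|rename type: as Types
|timechart span=1d values(count) by Types
|fillnull value=0
|eval Total=Model1 + Model2 + Model3 + Model4
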
I have a requirement to calculate the total time a user has been connected to the system. I have logs like the ones below, which show when a user connected and disconnected. How can we calculate the total time connected per userId?

{ "timestamp": "2024-02-19T20:15:01,544Z", "message": "Connection created.", "sessionId": "383c4345", "operation": "Connect", "spanId": "578045e3cfdbb598", "traceId": "578045e3cfdbb598", "userId": "test12" }
{ "timestamp": "2024-02-19T21:19:01,544Z", "message": "Connection disconnected.", "sessionId": "383c4345", "operation": "Disconnect", "spanId": "578045e3cfdbb598", "traceId": "578045e3cfdbb598", "userId": "test12" }
{ "timestamp": "2024-02-19T22:24:01,544Z", "message": "Connection created.", "sessionId": "383cxfe45", "operation": "Connect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test12" }
{ "timestamp": "2024-02-19T23:17:01,544Z", "message": "Connection disconnected.", "sessionId": "383cxfe45", "operation": "Disconnect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test12" }
{ "timestamp": "2024-02-19T19:45:01,544Z", "message": "Connection created.", "sessionId": "7490647a", "operation": "Connect", "spanId": "c6cc6afa1a770fc9", "traceId": "c6cc6afa1a770fc9", "userId": "test_234" }
{ "timestamp": "2024-02-19T20:19:01,544Z", "message": "Connection disconnected.", "sessionId": "7490647a", "operation": "Disconnect", "spanId": "c6cc6afa1a770fc9", "traceId": "c6cc6afa1a770fc9", "userId": "test_234" }
{ "timestamp": "2024-02-19T21:34:01,544Z", "message": "Connection created.", "sessionId": "f9cbd2f", "operation": "Connect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test_234" }
{ "timestamp": "2024-02-19T23:20:01,544Z", "message": "Connection disconnected.", "sessionId": "f9cbd2f", "operation": "Disconnect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test_234" }

The example covers two userIds, test12 and test_234, each of which connected and disconnected twice. I am expecting results like the following:

User       Total Time Connected
test12     1 hour 59 min
test_234   2 hours 20 min
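One possible approach (untested sketch). It assumes each sessionId has exactly one Connect and one Disconnect event, that _time is parsed from the timestamp field, and that the index/sourcetype names are placeholders:

index=<your_index> sourcetype=<your_sourcetype> operation IN (Connect, Disconnect)
| stats range(_time) as session_seconds by userId sessionId
| stats sum(session_seconds) as total_seconds by userId
| eval "Total Time Connected"=tostring(total_seconds, "duration")
| table userId "Total Time Connected"

range(_time) per sessionId is the gap between the Connect and Disconnect timestamps, the second stats sums those per user, and tostring(..., "duration") renders the total as HH:MM:SS.
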
I am trying to create a DB input keeping in mind the data we have in the database. It has fields like PKEY, STARTTIME, ENDTIME, etc. If I use PKEY or STARTTIME as the rising column, I am bound to miss some rows, and neither PKEY nor STARTTIME is unique on its own. So I am trying to use CONCAT(PKEY, STARTTIME):

SELECT BTACHTASK, ASOF, PKEY, STARTTIME, ENDTIME, CONCAT(PKEY, STARTTIME) AS combination
FROM CORE_MCA.SPLUNK_BATCH_STATES_VW
WHERE CONCAT(PKEY, STARTTIME) > ?
ORDER BY CONCAT(PKEY, STARTTIME) ASC

I am using a rising input, and the checkpoint should have been "combination", but I am not getting any results in the Rising Column dropdown. I am getting the error: java.sql.SQLException: Missing IN or OUT parameter at index:: 1. What am I doing wrong here? Also, sometimes the normal query gives this error too, but after refreshing and selecting the connection again I get the required data.
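A sketch of one way this is often handled, assuming an Oracle backend (the "Missing IN or OUT parameter" text is an Oracle JDBC error) and the column names from the post. That error usually means nothing has been bound to the ? placeholder, for example when the query is previewed before an initial checkpoint value has been set, and the Rising Column dropdown only populates after the query has executed successfully. Formatting the timestamp as a fixed-width string also keeps the concatenated checkpoint sorting consistently (this is only strictly increasing if PKEY is fixed-width):

SELECT BTACHTASK, ASOF, PKEY, STARTTIME, ENDTIME,
       PKEY || '|' || TO_CHAR(STARTTIME, 'YYYYMMDDHH24MISS') AS COMBINATION
FROM CORE_MCA.SPLUNK_BATCH_STATES_VW
WHERE PKEY || '|' || TO_CHAR(STARTTIME, 'YYYYMMDDHH24MISS') > ?
ORDER BY COMBINATION ASC
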
I have a timechart that shows the last 30d, and with the timechart I also have a trendline showing the sma7. The problem is that on the timechart the trendline doesn't show anything for days 1-6, which I understand is because there is no data from the previous days for the sma7 to calculate. I thought the solution could be to change my search to cover the last 37d and then only timechart days 7-37 (if that makes sense), but I can't figure out how to implement that or whether it is even a possible solution.

Existing search:
index=palo eventtype=user_logon earliest=-37d@d
| bin span=1d _time
| timechart count(eval(like(user_auth, "%-Compliant"))) as compliant count as total
| eval compliant=round(((compliant/total)*100),2)
| trendline sma7(compliant) as compliant7sma
| eval compliant7sma=round(compliant7sma,2)
| table _time, compliant, compliant7sma
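One possible approach (untested sketch): keep the 37-day search window so the sma7 has a full week to warm up, compute the trendline over the whole range, then drop the first week with a _time filter before displaying:

index=palo eventtype=user_logon earliest=-37d@d
| bin span=1d _time
| timechart count(eval(like(user_auth, "%-Compliant"))) as compliant count as total
| eval compliant=round(((compliant/total)*100),2)
| trendline sma7(compliant) as compliant7sma
| eval compliant7sma=round(compliant7sma,2)
| where _time >= relative_time(now(), "-30d@d")
| table _time, compliant, compliant7sma
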
I want to identify when an index has stopped receiving data, based on _indextime, with a tolerance that scales with how frequently the index normally indexes. For example:
1. Index A - it indexes once every 6 hours and populates the past 6 hours of events. In this circumstance I would want to know if it hasn't indexed for 8 hours or more. The tolerance is therefore relatively small (around 30% extra).
2. Index B - it indexes every second. In this circumstance I may forgive it not indexing for a few seconds, but I'd definitely want to know if it hasn't indexed in 10 minutes. The tolerance is therefore relatively large.
I don't think _time is right to use, as backfilled data would retrospectively fill the gaps and give false results. I feel that either the _internal index or tstats has the answer, but I've not yet come close.
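A starting point (untested sketch): measure the current indexing lag per index from _indextime and compare it against a per-index expected cadence. The expected_cadence.csv lookup and its expected_interval_sec column are hypothetical, something you would maintain or derive yourself; the 1.3 factor and 600-second floor roughly reproduce the two examples above (a 6-hour cadence gives a threshold of about 8 hours, a 1-second cadence gives about 10 minutes):

| tstats max(_indextime) as last_indexed where index=* by index
| eval lag_sec = now() - last_indexed
| lookup expected_cadence.csv index OUTPUT expected_interval_sec
| eval tolerance_sec = expected_interval_sec * 1.3 + 600
| where lag_sec > tolerance_sec
| eval last_indexed = strftime(last_indexed, "%F %T")
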
Hi everyone! I need to capture an endpoint that is requested with the PATCH method. Has anyone found a way to do this? In the detection rules I could only find GET, POST, DELETE, and PUT.
I am working on building an SRE dashboard, similar to https://www.appdynamics.com/blog/product/software-reliability-metrics/. Can someone help me build a monthly error budget burn chart? Thank you.
"I have an issue with creating a field named 'Path' which should be populated with 'YES' or 'NO' based on the following information: I have fields like 'Hostname', 'dev', and 'vulnerability'. I need... See more...
"I have an issue with creating a field named 'Path' which should be populated with 'YES' or 'NO' based on the following information: I have fields like 'Hostname', 'dev', and 'vulnerability'. I need to take the values in 'dev' and 'vulnerability' and check if there are other rows with the same 'hostname' and 'vulnerability'. If there is a match, I write 'NO' in the 'Path' field; otherwise, I write 'YES'." Hostname  dev vulnerabilita patch A B apache SI A B sql NO B 0 apache NO B 0 python NO C A apache SI
Hello, I'm collecting CloudTrail logs with the Splunk Add-on for AWS installed on a Splunk heavy forwarder. The following messages are appearing for the aws:cloudtrail:log source type in the _internal index:
" ~ level=WARNING pid=3386853 tid=Thread-7090 logger=urllib3.connectionpool pos=connectionpool.py:_put_conn:308 | Connection pool is full, discarding connection: bucket.vpce-abc1234.s3.ap-northeast-2.vpce.amazonaws.com. Connection pool size: 10"
Should the connection pool size for the Splunk Add-on for AWS be increased? If so, how can I increase it? I would like to know the recommended way to handle this warning. Thank you.