All Topics


DropDown 1 has 3 static options: ServerA, ServerB, ServerC. DropDown 2 needs to display the products of the selected server, using a query. How do I get server A, B, or C into my token?

Query:

| inputlookup abc.csv.gz
| search Hostname="ServerA"

<input type="dropdown" token="field1" searchWhenChanged="false">
  <label>License Server</label>
  <choice value="a">A</choice>
  <choice value="b">B</choice>
  <choice value="c">C</choice>
  <default>a</default>
  <change>
    <condition value="a">
      <unset token="c-details"></unset>
      <unset token="b-details"></unset>
      <set token="a-details"></set>
    </condition>
    <condition value="b">
      <unset token="a-details"></unset>
      <unset token="c-details"></unset>
      <set token="b-details"></set>
    </condition>
    <condition value="c">
      <unset token="a-details"></unset>
      <unset token="b-details"></unset>
      <set token="c-details"></set>
    </condition>
  </change>
</input>
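A minimal sketch of a cascading second dropdown, assuming the lookup has a Hostname column and a Product column (the Product name is a guess), and with the first dropdown's choice values changed to the server names so $field1$ can be dropped straight into the query:

<input type="dropdown" token="field1" searchWhenChanged="true">
  <label>License Server</label>
  <choice value="ServerA">A</choice>
  <choice value="ServerB">B</choice>
  <choice value="ServerC">C</choice>
  <default>ServerA</default>
</input>
<input type="dropdown" token="product" searchWhenChanged="true">
  <label>Product</label>
  <search>
    <query>| inputlookup abc.csv.gz | search Hostname="$field1$" | stats count by Product</query>
  </search>
  <fieldForLabel>Product</fieldForLabel>
  <fieldForValue>Product</fieldForValue>
</input>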
Hello, I have a multi-site cluster at version 9.0.1, with several indexers, SHs, and HF/UFs. The Monitoring Console is configured on the Cluster Manager, and "Forwarder Monitoring" is enabled, which lets me see the status of the forwarders. What is missing is the ability to select the HFs in the Resource Usage section of the Monitoring Console; they are not available. How can I get them to appear in Resource Usage in the Monitoring Console?

Thank you, Andrea
Hi all, we are currently facing an issue with our Splunk SOAR installation. Every time we open the playbook editor, it shows the errors in the screenshot below, and all the dropdown and search fields stop working (e.g., we're unable to choose apps or datatypes for the input). We have also tried reinstalling it (both v6.1.1 and v6.2.0). The service is running on a VM with Red Hat Enterprise Linux release 8.9. Do you have any suggestions on how we can solve this problem? Thanks for your help. Best regards
Hello, I would like to make a query in which I can see how long my equipment has been inactive and when it was inactive, preferably in a timechart. I would like to define inactive in two ways. One is when x, y, and z each stay at the same value +/-50 for 10 seconds or more; in the sample events below, a value of 1000 counts as unchanged anywhere between 950 and 1050. The second is when there has been no new event from a piece of equipment for more than 10 seconds. Any help would be very much appreciated. Below are some sample events and how long the equipment is active/inactive:

12:00:10 x=1000 y=500 z=300 equipmentID=1
12:00:15 x=1000 y=500 z=300 equipmentID=1
12:00:20 x=1025 y=525 z=275 equipmentID=1
12:00:25 x=1000 y=500 z=300 equipmentID=1 (20 seconds of inactivity)
12:00:30 x=1600 y=850 z=60 equipmentID=1
12:00:35 x=1600 y=850 z=60 equipmentID=1 (15 seconds of activity)
12:03:00 x=1650 y=950 z=300 equipmentID=1 (135 seconds of inactivity)
12:03:05 x=1850 y=500 z=650 equipmentID=1
12:03:10 x=2500 y=950 z=800 equipmentID=1
12:03:15 x=2500 y=950 z=400 equipmentID=1
12:03:20 x=2500 y=950 z=150 equipmentID=1 (15 seconds of activity)
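A minimal sketch of the second definition (a gap of more than 10 seconds between consecutive events per piece of equipment); the index name is hypothetical, and the first definition would additionally need a +/-50 tolerance check on x, y, and z:

index=equipment_data
| sort 0 equipmentID _time
| streamstats current=f window=1 last(_time) as prev_time by equipmentID
| eval gap=_time-prev_time
| eval inactive_seconds=if(gap>10, gap, 0)
| timechart span=1m sum(inactive_seconds) as inactive_seconds by equipmentID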
Hi all, I'm trying to extract a part of a field. The field is named Computer and looks like MySrv.MyDomain.MySubDom1.com; MySubDom1 may or may not be present. I would like to extract everything after MySrv. I tried:

index=MyIndex host=MySrv
| rex field=_raw "(?<domaine_test>(\.\w+))"

This creates a new field, but it stores only the first part, "MyDomain", and not the rest of the field. How can I do this? For example:

Computer = "MySrv.MyDomain.MySubDom1.com"
Result: Domain_test = "MyDomain.MySubDom1.com"
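A minimal sketch, assuming the Computer field is already extracted: anchoring on the first dot keeps everything after the host part, whether or not MySubDom1 is present.

index=MyIndex host=MySrv
| rex field=Computer "^[^.]+\.(?<Domain_test>.+)$"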
Hello all, I am confused about which machines are meant to have my inputs.conf files configured.

1. I am currently operating under the assumption that inputs.conf files are primarily for the indexer. Is this correct?
2. If I update an inputs.conf file, do I need to push the updated file through my deployment server so that the inputs.conf files tied to the applications on the UFs reflect the same changes made on the manager?
   a. I have raw XML data populating and I wish to fix this so that it is easier to read. Currently there is no sourcetype in my inputs.conf. I believe applying an appropriate sourcetype in inputs.conf is the first step to fixing this problem.
   b. There are multiple stanzas in inputs.conf. Do I need to apply a sourcetype to each of the stanzas that send XML logs, or is there a way to apply this change on a global scale? (See the stanza sketch below.)
3. Will someone please explain the difference between source and sourcetype? I have read the documentation on the matter and am still uncertain in my understanding.

Thanks for the help in advance!
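For reference on 2a/2b: the sourcetype is set per monitor stanza, so each stanza that reads XML needs its own sourcetype line (the path and names below are hypothetical). Inputs.conf belongs on whatever instance reads the data, which for file monitoring is usually the forwarder, and a deployment server can push it out.

[monitor:///var/log/myapp/events.xml]
sourcetype = my_xml_events
index = main
disabled = 0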
Hello, I have a Radial Gauge visualisation and want to specify a field as the range maximum value, with the following option:

<option name="charting.chart.rangeValues">[0,"1000",2000,5000]</option>

Is this possible? Thanks for helping, hitchmontana
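As far as I know, rangeValues wants literal numbers, but a token set from a search result can stand in for the maximum; a hedged sketch (the token name and the search are hypothetical):

<search>
  <query>index=myindex | stats max(value) as maxval</query>
  <done>
    <set token="max_tok">$result.maxval$</set>
  </done>
</search>

<option name="charting.chart.rangeValues">[0,1000,2000,$max_tok$]</option>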
Hi Team, can we install a data collector node on a heavy forwarder? What precautions need to be taken with this? What other services can be installed on the heavy forwarder?
Hi, I have a query that needs to compare the count of the PF field for two log files. In Splunk I have two queries that create this table; the issue is that PF values that are equal in query1 and query2 need to show in the same row.

current result:

hostname1  PF1     count1  hostname2  PF2     count2
host1      red     50      host2      yellow  90
host1      green   40      host2      green   90
host1      purple  50      host2      red     90

expected result:

hostname1  PF1     count1  hostname2  PF2     count2
host1      red     50      host2      red     90
host1      green   40      host2      green   90
host1      purple  50      host2      -       -
host1      -       -       host2      yellow  90

Here is the query:

index="myindex" "mymodule*:" AND "P[" AND "F[" source="/tmp/*/log.srv23.*.bz2"
| rex field=source "\/.*\/log\.(?<servername>\w+)."
| rex "P(?<PF>\[\d+\]\[\d+\])"
| stats count as _PF by PF,servername
| stats list(_PF) as count list(PF) as PF by servername
| appendcols [search index="myindex" "mymodule*:" AND "P[" AND "F[" source="/tmp/*/log.srv24.*.bz2"
  | rex field=source "\/.*\/log\.(?<servername>\w+)."
  | rex "P(?<PF2>\[\d+\]\[\d+\])"
  | stats count as _PF2 by PF2,servername
  | stats list(_PF2) as count list(PF2) as PF2 by servername ]

Any idea? Thanks
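One hedged alternative to appendcols: read both sources in a single search and let chart line up equal PF values on one row (the layout differs from the expected table above, but each PF gets one row with a count column per server):

index="myindex" "mymodule*:" AND "P[" AND "F[" (source="/tmp/*/log.srv23.*.bz2" OR source="/tmp/*/log.srv24.*.bz2")
| rex field=source "\/.*\/log\.(?<servername>\w+)\."
| rex "P(?<PF>\[\d+\]\[\d+\])"
| chart count over PF by servername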
I created an index in "manager-app\_cluster\local\indexes.conf" on the cluster manager in a cluster environment. The index now appears in "peer_apps\_cluster\local\indexes.conf" on the cluster indexers, but I cannot select that index from other servers (the deployment server or the SHs' Web UI). Please tell me how to create an index in a cluster environment so that it is visible everywhere.
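A hedged sketch of the usual pattern: besides the manager-apps bundle that reaches the peers, the same stanza also has to be present on the search heads (and any other instance whose UI should list the index); the index name and paths below are generic placeholders, not taken from the question:

[my_new_index]
homePath = $SPLUNK_DB/my_new_index/db
coldPath = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb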
Hi, I'm working on a Splunk operations team.
Version: Enterprise 7.3.4, CentOS 7.
Environment: 3 SHs, 10-indexer cluster. Search Factor: 3. Replication Factor: 3.

Why are the tsidx files different sizes for the same index? My situation: the tsidx files on one specific indexer are 2-3x larger than on the rest of the indexers, so that indexer's storage sometimes goes above 97% and indexing stops. After a Splunk instance restart, storage remains at 86-87% for a few days (the other indexers always remain at 86%). I checked the rawdata file sizes, but those are only affected by event count. Not all indexes are affected; 10 of 100 indexes are. I can't find the cause. Help me. If you need additional information, please let me know. Thank you.
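One way to quantify the skew per indexer is dbinspect; a minimal sketch, run from a search head against one affected index (the index name is a placeholder):

| dbinspect index=my_affected_index
| stats sum(sizeOnDiskMB) as totalMB count as buckets by splunk_server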
Hello, it looks like Microsoft Graph Security adds all tags to every event type, so events are not correctly CIM-mapped. Has anyone filtered events based on the alert data and mapped them to the correct data models? For example, I need to sort by event and map to the data models below:

Malware - IDS - Endpoint - Alert

Thx
Hi guys, I am very new to Splunk Cloud and how things work here. Our current setup is:

1. UF (Linux) -> Heavy Forwarder (on-prem) -> Indexer/Search Head (Splunk Cloud).
2. Created a new index quifapp on Splunk Cloud.
3. The UF is already connected to the HF (just a dummy connection; I verified that it's sending _internal logs to Splunk Cloud), as can be seen from the logs:
02-20-2024 11:22:11.394 +1100 INFO AutoLoadBalancedConnectionStrategy [566068 TcpOutEloop] - Found currently active indexer. Connected to idx=192.168.23.32:9997:0, reuse=1
4. The new request is to forward logs from /app/quif/quif.log to Splunk Cloud.
5. I have put the required config under /opt/splunkforwarder/etc/apps/quif/local, which has two files:

inputs.conf:
[monitor:///app/quif/quif.log*]
sourcetype = quif_requests
disabled = 0
index = quifapp

props.conf:
[quif_requests]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true
EXTRACT-AgentId = ^\w+:\s+(?P<AgentId>\w+)

6. I restarted the forwarder but can't see any logs coming into the Cloud.

Is there any additional config required at any level? How can I troubleshoot?
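Two generic first checks, hedged as troubleshooting steps rather than a known fix: confirm the forwarder actually picked up the monitor stanza, then search its internal logs for the file path.

On the UF:
/opt/splunkforwarder/bin/splunk list inputstatus

On Splunk Cloud:
index=_internal source=*splunkd.log* component=TailingProcessor "/app/quif/quif.log"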
I'm not sure why rex is properly matching the beginning of the value I am looking for (NameofTeam), but it also matches and includes everything after it. As I understand it, my search should stop matching when it reaches "}, after matching the team name. What am I doing wrong?

index=test
| rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Team>.+)\"},"

Sample Data:
{"Key": "OtherKey", "ValueString": "OtherValue"}, {"Key": "Owner", "ValueString": "NameofTeam"}, {"Key": "OtherKey", "ValueString": "OtherValue"},

Expected Output: NameofTeam
Actual Output: NameofTeam"}, {"Key": "OtherKey", "ValueString": "OtherValue"},
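The likely cause is that .+ is greedy, so it matches up to the last "}, in the event rather than the first; a negated character class (or a non-greedy .+?) stops at the first closing quote. A sketch:

index=test
| rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Team>[^\"]+)\"},"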
Hello all, I hope this is the right forum. I am having some trouble with the Barracuda Email Security Gateway Add-on and field extraction. We have a Splunk Cloud subscription, and I am using an Ubuntu server with rsyslog and a universal forwarder to send syslog data to our Splunk Cloud instance. I have the Barracuda Email Security Gateway Add-on installed in our Splunk Cloud. The data from our Barracuda Email Gateway system goes into a file called /var/log/syslog_barracuda.log. My inputs.conf file is configured as follows:

[monitor:///var/log/syslog_barracuda.log]
disabled = 0
sourcetype = barracuda

In our Splunk Cloud, I see the events, and they have the "barracuda" sourcetype as expected. The problem is, no field extraction is applied to these events. Is there something I am missing? The Add-on only shows to add these lines to the inputs.conf file. Any help would be appreciated; I am new to Splunk and trying to wrap my head around everything.
| mstats avg(os.mem.utilized) as Memory_Used where index=metricsidx host=host1 OR host=host2 span=1d
| table Memory_Used
| foreach avg* [| eval <<FIELD>>=round('<<FIELD>>',2)]

We have a dashboard where we are calculating memory for different hosts; the query above is one example. We don't want to repeat this query for all panels; instead, I want to create a base search covering all hosts, like the query below, but it's not working. Can anyone help with this?

Base search:

| mstats avg(os.mem.utilized) as Memory_Used where index=metricsidx host IN (host1, host2, host3, host4, host5, host6, host7, host8, host9, host10) by host span=1d
| foreach avg* [| eval <<FIELD>>=round('<<FIELD>>',2)]

Post-process search:

| table Memory_Used
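Two hedged observations: the post-process above drops the host split, so a panel can't tell hosts apart, and foreach avg* never matches a field already renamed to Memory_Used. A sketch that keeps host in the base results and lets each panel filter and round:

Base search:
| mstats avg(os.mem.utilized) as Memory_Used where index=metricsidx host IN (host1, host2, host3, host4, host5, host6, host7, host8, host9, host10) by host span=1d

Panel post-process (one per host):
| search host=host1
| eval Memory_Used=round(Memory_Used,2)
| table _time Memory_Used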
Query:

| tstats count where index=xxx host=host1 sourcetype=newsourcetype by PREFIX(type:) _time
| rename type: as Types
| timechart span=1d values(count) by Types
| eval Total=Model1 + Model2 + Model3 + Model4
| fillnull value=0

OUTPUT:

_time       Model1  Model2  Model3  Model4  Total
2021-04-12  2       0       1       4       0
2021-04-13  0       4       0       0       0
2021-04-14  8       2       10      4       24
2021-04-15  3       0       2       8       0
2021-04-16  1       4       2       2       9

EXPECTED OUTPUT:

_time       Model1  Model2  Model3  Model4  Total
2021-04-12  2       0       1       4       7
2021-04-13  0       4       0       0       4
2021-04-14  8       2       10      4       24
2021-04-15  3       0       2       8       13
2021-04-16  1       4       2       2       9
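The zero totals line up with running eval before fillnull: any row with a missing model value makes the sum null, and fillnull then turns that null Total into 0. Moving fillnull ahead of the eval is the minimal change (a sketch against the same query):

| tstats count where index=xxx host=host1 sourcetype=newsourcetype by PREFIX(type:) _time
| rename type: as Types
| timechart span=1d values(count) by Types
| fillnull value=0
| eval Total=Model1 + Model2 + Model3 + Model4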
I have a requirement to calculate the total time a user has been connected to the system. I have logs as below that state when a user connected and disconnected. How can we calculate the total time connected, by userId?

{ "timestamp": "2024-02-19T20:15:01,544Z", "message": "Connection created.", "sessionId": "383c4345", "operation": "Connect", "spanId": "578045e3cfdbb598", "traceId": "578045e3cfdbb598", "userId": "test12" }
{ "timestamp": "2024-02-19T21:19:01,544Z", "message": "Connection disconnected.", "sessionId": "383c4345", "operation": "Disconnect", "spanId": "578045e3cfdbb598", "traceId": "578045e3cfdbb598", "userId": "test12" }
{ "timestamp": "2024-02-19T22:24:01,544Z", "message": "Connection created.", "sessionId": "383cxfe45", "operation": "Connect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test12" }
{ "timestamp": "2024-02-19T23:17:01,544Z", "message": "Connection disconnected.", "sessionId": "383cxfe45", "operation": "Disconnect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test12" }
{ "timestamp": "2024-02-19T19:45:01,544Z", "message": "Connection created.", "sessionId": "7490647a", "operation": "Connect", "spanId": "c6cc6afa1a770fc9", "traceId": "c6cc6afa1a770fc9", "userId": "test_234" }
{ "timestamp": "2024-02-19T20:19:01,544Z", "message": "Connection disconnected.", "sessionId": "7490647a", "operation": "Disconnect", "spanId": "c6cc6afa1a770fc9", "traceId": "c6cc6afa1a770fc9", "userId": "test_234" }
{ "timestamp": "2024-02-19T21:34:01,544Z", "message": "Connection created.", "sessionId": "f9cbd2f", "operation": "Connect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test_234" }
{ "timestamp": "2024-02-19T23:20:01,544Z", "message": "Connection disconnected.", "sessionId": "f9cbd2f", "operation": "Disconnect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test_234" }

The examples cover two userIds, test12 and test_234, each of which connected/disconnected twice. I am expecting results like these:

User      Total Time Connected
test12    1 hour 59 min
test_234  2 hours 20 min
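A minimal sketch, assuming the JSON fields are auto-extracted and every Connect has a matching Disconnect for the same sessionId (the index and sourcetype names are placeholders):

index=myindex sourcetype=my_json (operation="Connect" OR operation="Disconnect")
| eval t_connect=if(operation="Connect", _time, null()), t_disconnect=if(operation="Disconnect", _time, null())
| stats min(t_connect) as start max(t_disconnect) as end by userId, sessionId
| eval duration=end-start
| stats sum(duration) as total_seconds by userId
| eval total_connected=tostring(total_seconds, "duration")
| table userId total_connected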
I am trying to create a DB input keeping in mind the data that we have in the DB. It has fields like PKEY, STARTTIME, ENDTIME, etc. If I use PKEY or STARTTIME as the rising column, I am bound to miss some rows, since neither PKEY nor STARTTIME is unique on its own. So I am trying to use CONCAT(PKEY, STARTTIME):

SELECT BTACHTASK, ASOF, PKEY, STARTTIME, ENDTIME, CONCAT(PKEY, STARTTIME) AS combination
FROM CORE_MCA.SPLUNK_BATCH_STATES_VW
WHERE CONCAT(PKEY, STARTTIME) > ?
ORDER BY CONCAT(PKEY, STARTTIME) ASC

I am using a rising input, and the checkpoint should be combination, but I am not getting any results in the rising column. I am getting the error java.sql.SQLException: Missing IN or OUT parameter at index:: 1. What am I doing wrong here? Also, sometimes the normal query also gives this error, but after refreshing and selecting the connection again I get the required data.
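One hedged guess, assuming an Oracle backend (the JDBC error message suggests it): concatenating a raw TIMESTAMP yields a session-format-dependent string that may not sort chronologically, and the ? placeholder is only bound once a checkpoint value exists, so supplying an initial checkpoint value and formatting the timestamp explicitly may help:

SELECT BTACHTASK, ASOF, PKEY, STARTTIME, ENDTIME,
       CONCAT(PKEY, TO_CHAR(STARTTIME, 'YYYYMMDDHH24MISSFF3')) AS combination
FROM CORE_MCA.SPLUNK_BATCH_STATES_VW
WHERE CONCAT(PKEY, TO_CHAR(STARTTIME, 'YYYYMMDDHH24MISSFF3')) > ?
ORDER BY combination ASC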
I have a timechart that shows the last 30d, and with the timechart I also have a trendline showing the sma7. The problem is that on the timechart the trendline doesn't show anything for days 1-6, which I understand is because there is no data from the previous days for the sma7 to calculate. I thought the solution could be to change my search to the last 37d and then only timechart days 7-37 (if that makes sense), but I can't seem to figure out how to implement that, or whether it is even a possible solution. Existing search:

index=palo eventtype=user_logon earliest=-37d@d
| bin span=1d _time
| timechart count(eval(like(user_auth, "%-Compliant"))) as compliant count as total
| eval compliant=round(((compliant/total)*100),2)
| trendline sma7(compliant) as compliant7sma
| eval compliant7sma=round(compliant7sma,2)
| table _time, compliant, compliant7sma
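One way to implement that, as a sketch: keep searching the full 37 days so sma7 has warm-up data, then drop the first week after the trendline is computed:

index=palo eventtype=user_logon earliest=-37d@d
| timechart span=1d count(eval(like(user_auth, "%-Compliant"))) as compliant count as total
| eval compliant=round(((compliant/total)*100),2)
| trendline sma7(compliant) as compliant7sma
| eval compliant7sma=round(compliant7sma,2)
| where _time>=relative_time(now(), "-30d@d")
| table _time, compliant, compliant7sma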