All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I need to color a dashboard table containing three types of logs in red, blue, and yellow according to ERROR, INFO, and WARNING. So far I have managed to integrate the CSS below into the panel, but it colors all rows blue:

<html>
<style type="text/css">
.table-colored .table-row, td {
  color: blue;
}
</style>
</html>

Does anyone know how to add conditions based on cell values, e.g. if the cell is INFO -> blue, if the cell is ERROR -> red, if the cell is WARNING -> yellow?
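A minimal sketch of the built-in Simple XML color formatting, which avoids custom CSS entirely (the field name log_level and the hex colors are assumptions; substitute the real column name and preferred colors):

  <table>
    <search>...</search>
    <format type="color" field="log_level">
      <colorPalette type="map">{"ERROR":#DC4E41,"INFO":#006D9C,"WARNING":#F8BE34}</colorPalette>
    </format>
  </table>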
Hello Splunkers, I hope you are all doing well. I have tried to deploy the Windows TA add-on across my environment: [Search Head Cluster + Deployer] [3 Indexer Peers + Indexer Cluster Master] [Deployment Server + Universal Forwarder]. I used the deployment server to push inputs.conf to the designated universal forwarder, which sits on the domain controller, and enabled the inputs I needed. I then removed wmi.conf and inputs.conf from the Windows TA add-on, copied the rest to the local folder, and used the deployer to push the modified Windows TA to the search heads. As per the screenshot from the official doc (not included here), installing on the indexers is conditional. Why should I push the add-on to the indexers even if there are index-time field extractions? As far as I know, the search head cluster replicates the whole knowledge bundle to the indexers, so all the knowledge objects will be replicated and there is no need to push them; am I correct? Splunk Add-on for Microsoft Windows. Thanks in advance!!
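For reference, a generic sketch of what an index-time configuration looks like; settings like these take effect where the data is parsed (indexers or heavy forwarders), not through the search-time knowledge bundle. The stanza below is hypothetical and is not taken from the Windows add-on:

  # props.conf (hypothetical index-time settings)
  [my:windows:sourcetype]
  LINE_BREAKER = ([\r\n]+)
  SHOULD_LINEMERGE = false
  TRANSFORMS-set_meta = my_index_time_transform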
[UPDATE] Hello everyone, and thanks in advance for your help. I'm very new to this subject, so if anything is unclear I'll try to explain my problem in more detail. I'm using Splunk 9.2.1, and I recently observed that my indexer was not indexing the logs it received. The indexer is in a failure state because my $SPLUNK_DB partition reached the minFreeSpace allowed in server.conf. After further analysis, it seems that one of the indexes on the partition, _metrics, is saturated with warm buckets (db_*) and taking all the available space. However, I have configured all my indexes in indexes.conf ($SPLUNK_HOME/etc/system/default/indexes.conf):

# index specific defaults
maxTotalDataSizeMB = 5000
maxDataSize = 1000
maxMemMB = 5
maxGlobalRawDataSizeMB = 0
maxGlobalDataSizeMB = 0
rotatePeriodInSecs = 30
maxHotIdleSecs = 432000
maxHotSpanSecs = 7776000
maxHotBuckets = auto
maxWarmDBCount = 300
frozenTimePeriodInSecs = 188697600
# there's more, but I might not be able to disclose it, or it might not be relevant

[_metrics]
coldPath = $SPLUNK_DB/_metrics/colddb
homePath = $SPLUNK_DB/_metrics/db
thawedPath = $SPLUNK_DB/_metrics/thaweddb
frozenTimePeriodInSecs = 1209600

From what I understand, with this configuration applied the index should not exceed 5 GB, and when that limit is reached the warm/hot buckets should be removed, but it seems that's not taken into account in my case. The indexer works fine after purging the buckets and restarting it, but I don't get why the configuration was not applied. Is there something I didn't get here? Is there a way to check the "characteristics" of my index once started? -> Checked, the configuration is correctly applied. If you know anything on this subject, please help me. Thank you.
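A quick sketch of one way to verify the effective per-index settings on the indexer itself, using the standard btool CLI (the index name comes from the post):

  $SPLUNK_HOME/bin/splunk btool indexes list _metrics --debug

The --debug flag prints which configuration file each setting was resolved from, which helps spot a higher-precedence file overriding the expected values.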
L.s., at our company we have multiple heavy forwarders. Normally they talk to the central license manager, but for migration reasons we have to make them talk to themselves, so a forwarder license is in order, I think. It looks like an easy process. I did the following:

./splunk edit licenser-groups Forwarder -is_active 1

This sets the following in /opt/splunk/etc/system/local/server.conf:

[license]
active_group = Forwarder

and

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

I had to set master_uri = self myself; after restarting the server it looks like a go. But when I do this on every heavy forwarder, it gives these errors in _internal:

Duplicate license hash: [FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD], also present on peer .

and

Duplicated license situation not fixed in time (72-hour grace period). Disabling peer..

Why? Is it really going to disable the forwarders when they use the forwarder license? What should I do about it, and where did I go wrong? Thanks in advance. Greetz, Jari
Hi all, we are on Splunk version 9.2, and we want a custom dashboard as the landing page: whenever any user logs in, the custom dashboard should appear along with the apps column on the left side. How can we achieve this? Right now this works only for me, and even then it lands in the quick-links tab. I can also set the dashboard as the default under the navigation menu, but then it does not show the apps column on the left side.
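A minimal sketch of the two settings usually involved (my_app and my_dashboard are placeholders): user-prefs.conf controls which app all users land in, and the app's navigation XML controls the default view inside that app. Whether the left-hand apps column appears still depends on the app's own UI, so this is a sketch rather than a guaranteed fix.

  # $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf
  [general_default]
  default_namespace = my_app

  # $SPLUNK_HOME/etc/apps/my_app/local/data/ui/nav/default.xml
  <nav>
    <view name="my_dashboard" default="true" />
  </nav>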
We search through the logs of switches, and some logs are unconcerning if you have just a couple of them, like 5 in an hour. But if you have more than 50 in an hour, something is wrong and we want to raise an alarm for that. The problem is, I cannot simply search back over the last hour and show devices with more than 50 results, because I would not catch an issue that existed 5 hours ago. So I am looking into timecharts that compute a statistic every hour, and then I want to filter out the "chart" slots that have fewer than x results. What I came up with is this (searching the last 24h):

index=network mnemonic=MACFLAP_NOTIF | timechart span=1h usenull=f count by hostname where max in top5

But this does not work, as I still get all the timechart slots where I have 0 or fewer than 50 logs. So imagine the following data:

switch01 08:00-09:00  0 logs
switch01 09:00-10:00  8 logs
switch01 10:00-11:00 54 logs
switch01 11:00-12:00 61 logs
switch01 12:00-13:00 42 logs
switch02 08:00-09:00  6 logs
switch02 09:00-10:00  8 logs
switch02 10:00-11:00 33 logs
switch02 11:00-12:00 29 logs
switch02 12:00-13:00 65 logs

My ideal search would return the following lines:

Time         Hostname  Results
10:00-11:00  switch01  54
11:00-12:00  switch01  61
12:00-13:00  switch02  65

The time is not that important; I'm looking more for the results based on the count. Any help is appreciated.
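A minimal sketch of one way to get per-hour counts as rows that a plain filter can then prune (the threshold of 50 comes from the post):

  index=network mnemonic=MACFLAP_NOTIF
  | bin _time span=1h
  | stats count by _time hostname
  | where count > 50

Unlike timechart, stats keeps one row per hour per host, so the where clause can drop the quiet slots directly.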
Hi, I have a user, let's say USER1, whose account is getting locked every day. I searched his username in Splunk and events are coming from two indexes, _internal and _audit. How do I check the reason his account is locked?
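A starting-point sketch for Splunk-local authentication failures (the action/info values below are standard _audit fields; if the lockout comes from another system, such as Active Directory, the relevant events would live elsewhere):

  index=_audit user=USER1 action="login attempt" info=failed
  | table _time host action info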
Just curious to find out if anyone has ever integrated a Splunk Cluster with ITSI. It seems to me that SC certainly qualifies as a service with many dependencies, quite a few of which are not covered by the Monitoring Console. While comprehensive, the MC doesn't include many things that SC just assumes are working correctly, primary among these the networks on which everything relies. For instance, a flapping port on a switch, or a configuration problem with a switch or router, could potentially cause many interesting issues with SC about which the MC would have no clue. Anyone ever made any moves along these lines? Charles
Hello Splunkers!! I have a raw event, but the fields server IP and server name are not present in it, and I need to extract both of these fields in Splunk at index time. Both fields have static values. What attributes should I use in props.conf and transforms.conf so that I can get both of these fields?

Servername="mobiwick"
ServerIP="10.30.xx.56.78"

Sample raw data:

<?xml version="1.0" encoding="utf-8"?><StaLogMessage original_root="ToLogMessage"><MessageId>6cad0986-d4b2-45e2-b5b1-e6a1af3c6d40</MessageId><MessageTimeStamp>2024-11-24T07:00:00.1115119Z</MessageTimeStamp><SenderFmInstanceName>TOP/Top</SenderFmInstanceName><ReceiverFmInstanceName>BPI/Bpi</ReceiverFmInstanceName><StatisticalElement><StatisticalSubject><MainSubjectId>NICKER</MainSubjectId><SubjectId>Prodtion</SubjectId><SubjectType>PLAN</SubjectType></StatisticalSubject><StatisticalItem><StatisticalId>8</StatisticalId><Period><TimePeriodEnd>2024-11-24T07:00:00Z</TimePeriodEnd><TimePeriodStart>2024-11-24T06:00:00Z</TimePeriodStart></Period><Value>0</Value></StatisticalItem></StatisticalElement></SogMessage>
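A minimal sketch of adding static indexed fields with a metadata-writing transform (the sourcetype name is a placeholder; the values come from the post). This must live on the first full Splunk instance that parses the data (indexers or a heavy forwarder):

  # transforms.conf
  [add_static_server_fields]
  REGEX = .
  FORMAT = Servername::mobiwick ServerIP::10.30.xx.56.78
  WRITE_META = true

  # props.conf
  [my:xml:sourcetype]
  TRANSFORMS-server_fields = add_static_server_fields

  # fields.conf (so the fields are searchable as indexed fields)
  [Servername]
  INDEXED = true

  [ServerIP]
  INDEXED = true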
Hello, I need help regarding an add-on which I built. The add-on was built using the Splunk Add-on Builder; it passed all the tests and can be installed on Splunk Enterprise and also on a single instance of Splunk Cloud. However, when it is installed on a cluster it does not work properly. When installed, the add-on is supposed to create some CSV files and store them in the add-on. However, in a clustered Splunk environment it suddenly does not create the CSV files and simply does not download the files it was supposed to download. Any help or advice is welcome, please. This is the add-on below. https://classic.splunkbase.splunk.com/app/7002/#/overview
Hello, I have the following query to search Proofpoint logs:

index=ppoint_prod host=*host1*
| eval time=strftime(_time, "%m-%d-%y %T")
| rex "env_from\s+value=(?<sender>\S+)"
| rex "env_rcpt\s+r=\d+\s+value=(?<receiver>\S+)"
| stats first(time) as Date first(ip) as ConnectingIP first(reverse) as ReverseLookup last(action) last(msgs) as MessagesSent count(receiver) as NumberOfMessageRecipients first(size) as MessageSize1 first(attachments) as NumberOfAttachments values(sender) as Sender values(receiver) as Recipients first(subject) as Subject by s
| where Sender!=""

It provides what I need at a per-message level. How would I modify it to get the list of ConnectingIP and ReverseLookup values per Sender? If possible, it would be nice to also get the number of messages per sender, but it is not absolutely necessary. I understand I will need to drop from the query everything that is message-specific, like Subject, NumberOfAttachments, etc. I am looking to get something like this:

Sender              ConnectingIP    ReverseLookup
sender1@domain.com  ConnectingIP_1  ReverseLookup_1
                    ConnectingIP_2  ReverseLookup_2
sender2@domain.com  ConnectingIP_3  ReverseLookup_3
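A minimal sketch of the per-sender rollup, reusing the extraction from the post (counting dc(s) as messages assumes the field s identifies one message, as it does in the original stats clause):

  index=ppoint_prod host=*host1*
  | rex "env_from\s+value=(?<sender>\S+)"
  | stats values(ip) as ConnectingIP values(reverse) as ReverseLookup dc(s) as Messages by sender
  | where sender!=""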
Hi folks, I've been using mcollect to collect metrics from the events in my indexes, and I thought that if I set up an alert with the mcollect part in the search, it would automatically collect the metrics every X minutes. But that doesn't seem to be working; the metrics are only collected when I run the search manually. Any suggestions on how I can make mcollect collect the metrics I am looking for automatically? Thanks
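A minimal sketch of running the mcollect pipeline as an ordinary scheduled search instead of an alert (the stanza name, search, and schedule are placeholders); a scheduled search executes its full pipeline, including mcollect, on every run:

  # savedsearches.conf
  [collect_my_metrics]
  search = index=my_events ... | mcollect index=my_metrics
  enableSched = 1
  cron_schedule = */5 * * * *
  dispatch.earliest_time = -5m
  dispatch.latest_time = now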
Probably an easy one. I have two events as follows:

thisisfield1 thisisfield2 mynextfield3
thisisfield1 mynextfield3

Meaning in some events field2 exists and in some it doesn't. When it does, I want the value; when it doesn't, I want it to be blank. All records have mynextfield3, and I always want that as field3. I want to rex these lines and end up with:

field1        field2        field3
thisisfield1  thisisfield2  mynextfield3
thisisfield1                mynextfield3
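A minimal sketch using an optional non-capturing group, assuming the third field can be recognized by a known prefix (here the literal mynextfield from the sample data):

  | rex "^(?<field1>\S+)\s+(?:(?<field2>\S+)\s+)?(?<field3>mynextfield\S+)$"

When the middle token is absent, the optional group matches nothing and field2 stays null; a following fillnull value="" field2 can turn that into a blank string if needed.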
Hello, I want to create an HEC input on the indexers (indexer cluster). I created inputs.conf under /opt/splunk/etc/master-apps/_cluster/http_input/local:

[http]
disabled=0
enableSSL=0

[http://hec-input]
disabled=0
enableSSL=0
#useACK=true
index=HEC
source=HEC_Source
sourcetype=_json
token=2f5c143f-b777-4777-b2cc-ea45a4288677

I pushed this configuration to the peer apps (indexers). But when we go to Data inputs => HTTP Event Collector on the indexer side, we still find it as below: [screenshot not included]
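A quick sketch for verifying the deployed token directly, bypassing the UI (8088 is the default HEC port; the token and index come from the post; -k skips certificate checks and is for testing only):

  curl -k https://<indexer>:8088/services/collector/event \
    -H "Authorization: Splunk 2f5c143f-b777-4777-b2cc-ea45a4288677" \
    -d '{"event": "hec smoke test", "sourcetype": "_json", "index": "HEC"}'

A {"text":"Success","code":0} response means the input is live even if the Data inputs page does not show it as expected.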
I have read other forums and posts, but they did not help with my issue. I am getting the typical error while creating deployable apps:

Error copying src "/opt/splunk/etc/shcluster/my_app" to staging area "/opt/splunk/var/run/splunk/deply.#######.tmp/apps/my_app." 180 errors

They are all Python files (.py), and this happens for every app that has Python files. I have checked the permissions, and splunk owns the files. I am so confused as to what is happening. I understand this is a permission issue, but splunk owns the files, with permissions set to rw on all Python files on the deployer. Also, the splunk user owns all files under /opt/splunk. Any help greatly appreciated!
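A short sketch of checks that usually narrow this down (paths from the post; a splunk service account is an assumption):

  # directory traversal (x) bits matter as much as file rw bits
  ls -ld /opt/splunk/etc/shcluster /opt/splunk/etc/shcluster/apps
  # find anything under shcluster not owned by the splunk user
  find /opt/splunk/etc/shcluster -not -user splunk -ls
  # find files the owner cannot read
  find /opt/splunk/etc/shcluster -type f ! -perm -u+r -ls

It is also worth confirming that the staging area /opt/splunk/var/run/splunk is writable by the same user that runs the apply command.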
Hello team, I have forwarded syslog to Splunk Enterprise. I am trying to create props.conf and transforms.conf in such a way that Splunk ingests all messages matching the keywords I have defined in a regex in transforms.conf and drops all non-matching messages; however, I am not able to do so. Is there a way to do that, or do props.conf and transforms.conf only work to drop the messages defined in the regex? Currently, if I try this, Splunk drops only the messages with the keywords I defined and ingests everything else. I am new to Splunk, so I am requesting some input on this. Thanks in advance!!
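A minimal sketch of the standard keep-only pattern: route everything to the nullQueue first, then route matching events back to the indexQueue (the sourcetype and keywords are placeholders). Transforms run in the listed order, and the last one to match wins:

  # props.conf
  [my_syslog_sourcetype]
  TRANSFORMS-filter = drop_all, keep_keywords

  # transforms.conf
  [drop_all]
  REGEX = .
  DEST_KEY = queue
  FORMAT = nullQueue

  [keep_keywords]
  REGEX = keyword1|keyword2
  DEST_KEY = queue
  FORMAT = indexQueue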
I'm trying to write a regex for the field that has "REPLY":

CommonEndpointLoggingAspect {requestId=94f2a697-3c0d-4835-b96a-42be3d2426e2, serviceName=getCart} - REPLY
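A minimal sketch, assuming the goal is to capture the trailing token after the closing brace (REPLY in the sample; message_type is a placeholder field name):

  | rex "\}\s+-\s+(?<message_type>\w+)\s*$"

The same pattern can be extended to pull requestId and serviceName from inside the braces.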
curl command:

curl -k -u admin:Password -X POST http://127.0.0.1:8000/en-US/services/authorization/tokens?output_mode=json --data name=admin --data audience=Users --data-urlencode expires_on=+30d

I am able to log in via the UI and create an access token, but if I try to do the same using the curl command above, I get the response below. Note: the response has been trimmed.

<div class="error-message">
<h1 data-role="error-title">Oops.</h1>
<p data-role="error-message">Page not found! Click <a href="/" data-role="return-to-splunk-home">here</a> to return to Splunk homepage.</p>
</div>
</div>
</div>
<div class="message-wrapper">
<div class="message-container fixed-width" data-role="more-results"><a href="/en-US/app/search/search?q=index%3D_internal%20host%3D%22f6xffpvw93.corp.com%2A%22%20source%3D%2Aweb_service.log%20log_level%3DERROR%20requestid%3D6740cfffb611125b5e0" target="_blank">View more information about your request (request ID = 6740cfffb611125b5e0) in Search</a></div>
<div class="message-container fixed-width" data-role="crashes"></div>
<div class="message-container fixed-width" data-role="refferer"></div>
<div class="message-container fixed-width" data-role="debug"></div>
<div class="message-container fixed-width" data-role="byline">
<p class="byline">.</p>
</div>
</div>
</body>
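A likely-relevant sketch: the REST endpoint lives on the management port (8089 by default) and takes no /en-US locale prefix; the URL above points at Splunk Web (port 8000), which is why Splunk Web answers with its 404 page. Assuming the default management port:

  curl -k -u admin:Password -X POST https://127.0.0.1:8089/services/authorization/tokens?output_mode=json \
    --data name=admin --data audience=Users --data-urlencode expires_on=+30d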
I need help extracting a field that comes after a certain word in an event. I am looking to extract a field called "sn_grp" with the value "M2 Infra Ops". So for every event that has sn_grp: I would like to extract the string that follows, "M2 Infra Ops". This string value will be the same for every event. Below is an example of the data set I am using to write the regex:

\"sn_grp:M2 Infra Ops\"},{\"context\":\"CONTEXTLESS\",\"key\":\"Correspondence Routing Engine\
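A minimal sketch based on the sample above, assuming the value contains only letters, digits, underscores, and spaces (the capture stops at the backslash before the escaped quote):

  | rex "sn_grp:(?<sn_grp>[\w ]+)"

With the sample data, sn_grp comes out as M2 Infra Ops.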
I need to run a script to check whether a list of Linux servers have Splunk installed, and what the process name is. Any idea what the process name is, or the install directory? And whether it is forwarding to the Splunk console?
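A minimal sketch, assuming a Universal Forwarder in the default /opt/splunkforwarder location (the process is splunkd; a full Splunk Enterprise install lives under /opt/splunk instead, and servers.txt is a placeholder host list):

  #!/bin/sh
  # for each host, check the process, the install directory, and the forwarding targets
  for h in $(cat servers.txt); do
    echo "== $h =="
    ssh "$h" '
      pgrep -x splunkd >/dev/null && echo "splunkd is running" || echo "splunkd not running"
      [ -d /opt/splunkforwarder ] && echo "installed: /opt/splunkforwarder"
      grep -h "^server" /opt/splunkforwarder/etc/system/local/outputs.conf 2>/dev/null
    '
  done

Reading outputs.conf avoids the credential prompt that running splunk list forward-server would trigger.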