All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Looking for an SPL query to get the index-wise log consumption, split by month, for the last 6 months.
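A common starting point, assuming you have access to the license usage data in _internal (the type=Usage events, available on the license manager) and that licensed volume per index is what is meant by consumption, is a sketch like:

```spl
index=_internal source=*license_usage.log* type=Usage earliest=-6mon@mon latest=@mon
| eval GB=round(b/1024/1024/1024,2)
| timechart span=1mon sum(GB) as GB by idx
```

Here idx is the index name and b is the byte count reported by license_usage.log; span=1mon produces one column per month.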
When checking the URL categorization for a URL, it appears that the URL has been classified under two categories, for example, Business/Economy and File Storage/Sharing. However, we can only see one category in the Splunk field (field name: filter_category). Is this something to do with the data collection in Splunk? Any details are appreciated. Check the current WebPulse categorization for any URL: https://sitereview.bluecoat.com/#/
I want a chart as follows. I could show each count value (cannot use a calc field).

(index=interface_count devicename IN ($select_device$) INTinfo1=Gi0/1 OR Gi0/2 data_field_name=Rx_counter) OR (index=interface_count devicename IN ($select_device2$) description IN ($select_device$) data_field_name=Rx_counter)
| timechart span=5m eval(round(max(eval(Rx/1E5)),1)) as Rx_count by INTinfo1

_time   Device_A Gi0/1 (a)   Device_A Gi0/2 (b)   Device_B Gi0/8 (c)   Calc A+B-C
10:00   100                  200                  50                   250
10:05   100                  300                  80                   320
10:10   150                  250                  100                  300
Our servers are in Germany, but the Splunk time is 2 hours off. Why is that? For example, the event creation time is 5:02 am German time, but in Splunk it shows as 3:02 am. Any solutions?
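A 2-hour difference between German local time (CEST) and what Splunk shows usually means the events carry no explicit timezone in their timestamps and are being parsed as UTC (note the example actually shows Splunk 2 hours behind, which matches UTC vs CEST). One common fix, assuming you can identify the sourcetype (the name below is a placeholder), is to set TZ in props.conf on the indexer or heavy forwarder that parses the data:

```
# props.conf on the parsing tier; "my_sourcetype" is an example name
[my_sourcetype]
TZ = Europe/Berlin
```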
Hi Team, We have P1 Splunk alerts generated based on event ID: 12320 triggered from the following servers: scwdxxxxx0009 scwdxxxxx0008 scwpxxxxx0002 scwpxxxxx0001 Recently, we identified that we have a 24-hour suppression time for the alert, which led to a critical incident. To address this issue, the user has requested a reduction in the suppression time for the alert. The goal is to eliminate suppression unless the previous triggered alert is still open. If there are no open P1 tickets for event ID: 12320, there should not be any suppression of the generation of new tickets. Current Alert Configuration We have one alert in Splunk, and we are using the following query: Splunk query: index=winevent sourcetype="WinEvent:*" ((host="scwpxxxxx0001*" OR host="scwdxxxxx0008*" OR host="scwdxxxxx0009*" OR host="scwpxxxxx0002*") AND (EventCode=12320)) | eval assignment_group = "ABC IT - Computing Services" | eval host=lower(mvindex(split(host,"."),0)) | eval correlation_id=strftime(_time,"%Y-%m-%d %H:%M:%S").":".host | eval short_description=case((host="scwpxxxxx0001" OR host="scwdxxxxx0008"),"Microsoft AAD Proxy Connector - Prod not able to connect due to network issues.",(host="scwdxxxxx0009" OR host="scwpxxxxx0002"),"Microsoft AAD Proxy Connector - Dev not able to connect due to network issues.", 1=1, 0 ) | eval category="Application", subcategory="Repair/Fix", contact_type="Event", state=4, ci=host, customer="no573", impact=1, urgency=1, description="Event Code ".EventCode." encountered on host ".host." at ".strftime(_time,"%m/%d/%Y %H:%M:%S %Z")." SourceName:".SourceName." Log Name: ".LogName." TaskCategory:".TaskCategory." Message=".Message." 
Ticket generated on SNOW at ".strftime(now(),"%m/%d/%Y %H:%M:%S %Z") | table host, short_description, assignment_group, impact, urgency, category, subcategory, description, ci, correlation_id

Alert Type: Scheduled
Schedule: Run on cron schedule: */30 * * * * (every 30 minutes)
Time Range: Last 4 hours
Expiration: 24 hours
Throttle: Enabled
Suppress results containing field value: host, EventCode
Suppress triggering for: 24 hours
Trigger Actions: ServiceNow Incident Integration

How can we suppress the alert as per the requirement? Please help us here. Thank you.
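One possible direction, not a confirmed solution: shrink the throttle window to the cron period (30 minutes) so throttling only deduplicates overlapping runs, and gate new tickets on open incidents by enriching the search with a lookup of currently open SNOW tickets. The lookup name and field names below are hypothetical; in practice a separate scheduled search would keep the lookup populated with open incidents pulled from ServiceNow:

```spl
... existing alert search ...
| lookup snow_open_p1_incidents.csv host AS host OUTPUT incident_state AS open_state
| where isnull(open_state)
```

With this filter, the alert only returns results (and thus only creates a ticket) for hosts that have no open incident.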
Hi, I have seen a steady increase in perfmon events/data over the past 30 days. The number of hosts has been about the same, and overall production activity is the same. One host was added during the 30-day time frame; I thought that host may have been the cause of the increase, but that new host is not even in the top 10 most active hosts. The amount of overall perfmon data in proportion to the wineventlog data is increasing. Please see the attached chart: perfmon is represented with the brown bar, wineventlog is the green bar. I'm asking for any ideas that would help me identify the cause of this change. Thank you in advance for any help.
Hi Team, I have generated dynamic URLs using a lookup and added them to the field values of a table. Now I need to make those dynamic URLs hyperlinks so that we don't have to manually copy and paste the URL into the browser every time. I modified the source code as below, but it is not working. Please assist with this. Thank you.

"visualizations": {
    "viz_abc123": {
        "type": "splunk.table",
        "options": {
            "count": 5000,
            "dataOverlayMode": "none",
            "drilldown": {
                "condition": {
                    "field": "URL",
                    "link": "$row.URL|n$"
                }
            },
            "backgroundColor": "#FAF9F6",
            "tableFormat": {
                "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)",
                "headerBackgroundColor": "> backgroundColor | setColorChannel(tableHeaderBackgroundColorConfig)",
                "rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)",
                "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)"
            },
            "showInternalFields": false,
            "columnFormat": {
                "Duration(Secs)": {
                    "data": "> table | seriesByName(\"Duration(Secs)\") | formatByType(Duration_Secs_ColumnFormatEditorConfig)",
                    "rowColors": "> table | seriesByName(\"Duration(Secs)\") | rangeValue(Duration_Secs_RowColorsEditorConfig)"
                },
                "Duration(Mins)": {
                    "data": "> table | seriesByName(\"Duration(Mins)\") | formatByType(Duration_Mins_ColumnFormatEditorConfig)",
                    "rowColors": "> table | seriesByName(\"Duration(Mins)\") | rangeValue(Duration_Mins_RowColorsEditorConfig)"
                }
            }
     },
Hello All, I am using | jirarest to fetch tickets from JIRA search results into Splunk. In JIRA I have around 300 tickets, but when I fetch them in Splunk, only 50 are returned. I tried adding maxResults=1000, but I got 100 tickets. I searched around and found that in JIRA Cloud, if there are more than 100 items to return, we have to iterate through them in batches using startAt. But the challenge is that I am unable to find any way of running the iteration, since I only get 50 tickets and no more on which I could run the iteration. Thus, I need your guidance on how to build a solution or workaround in Splunk to fetch all tickets. Thank you, Taruchit
With polkit versions 0.120 and below, the version number was structured with a major/minor format always using the major version of 0. It appears that Splunk was using that dot between them to decode the version number in its create-polkit-rules option to detect whether the older PKLA file or the newer JS version would be supported. Starting in polkit version 121, the maintainers of polkit have dropped the "0." major number and started using the minor version as the major version. Because of this, Splunk does not currently seem to be able to deploy its own polkit rules. This affects both RHEL 9 and Ubuntu 24.04 so far in my testing. Has anyone else run into this issue or have another workaround for it? Thanks!   root@dev2404-1:~# pkcheck --version pkcheck version 124 root@dev2404-1:~# apt-cache policy polkitd polkitd: Installed: 124-2ubuntu1 Candidate: 124-2ubuntu1 Version table: *** 124-2ubuntu1 500 500 http://archive.ubuntu.com/ubuntu noble/main amd64 Packages 100 /var/lib/dpkg/status root@dev2404-1:~# /opt/splunk/bin/splunk version Splunk 9.2.1 (build 78803f08aabb) root@dev2404-1:~# /opt/splunk/bin/splunk enable boot-start -user splunk -systemd-managed 1 -create-polkit-rules 1 " ": unable to parse Polkit major version: '.' separator not found. ^C root@dev2404-1:~#     https://github.com/polkit-org/polkit/tags
We apparently have the StreamWeaver integration in place, but we are not sure how it was implemented, as the folks who did it are no longer around. How is it usually done? Is it a REST API integration? I see it listed at Connect: Splunk Enterprise.
We have this stood up and working...sort of. Splunk Admins can configure alerts to add the "ServiceNow Incident Integration" action, and we can create Incidents in Splunk. The problem is, we have a lot of development teams that create/maintain their own alerts in Splunk. When they go to add this action, they're not able to select the account to use when configuring the action, because they don't have read permission to the account. Even if an Admin goes in and configures the action, it won't work at run time, because the alert runs under the owner's permissions, which can't read the credentials used to call ServiceNow. Has anyone else run into this issue? How can this be set up to allow non-Admins to maintain alerts?
Hi SMEs, while checking the logs from one of the log sources, I can see that events are not ending properly and are getting clubbed together. I'm putting the snap below and seeking your best advice on how to fix it.
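If events are being merged because Splunk cannot find the event boundaries, a hedged sketch of props.conf line breaking, assuming each event starts with an ISO-style date like 2024-05-22 (the sourcetype name and regex are examples to adapt to the actual log format):

```
# props.conf on the indexer/heavy forwarder that parses this sourcetype
[my_sourcetype]
SHOULD_LINEMERGE = false
# break before any line that starts with YYYY-MM-DD
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
```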
I have three lookup files and I am trying to find out which one has a zero count. Below is the query I am using.

| inputlookup file_intel
| inputlookup append=true ip_intel
| inputlookup append=true http_intel
| search threat_key=*risklist_hrly*
| stats count by threat_key

I want to know which threat_key has a zero count for threat_key=*risklist_hrly*. I have tried fillnull; it's not working. I can only see the ones that have a count; I want to also get the ones that have a zero count.
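stats count by threat_key can only group on values that survive the search filter, so zero-match files never appear in the output. One sketch is to tag each row with its source file before filtering, then sum a match flag (the field name src is made up here):

```spl
| inputlookup file_intel | eval src="file_intel"
| inputlookup append=true ip_intel | eval src=coalesce(src,"ip_intel")
| inputlookup append=true http_intel | eval src=coalesce(src,"http_intel")
| eval hit=if(match(threat_key,"risklist_hrly"),1,0)
| stats sum(hit) as count by src
```

Each lookup file then appears with its matching-row count, including 0 for files with no risklist_hrly rows.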
I have two sources that I'd like to combine/join, or search one based on the other. Source 1 has two fields: name & date. Source 2 has several fields, including name & date, field1, field2, field3, etc. I'd like to get the most recent date for a specific name from source 1, and show only the events in source 2 with that name & date.
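One way to sketch this in SPL is with a subsearch, so the latest date for the chosen name becomes a filter on source 2 (the index/source names and the literal name value are placeholders):

```spl
index=main source=source2
  [ search index=main source=source1 name="some_name"
    | stats max(date) as date by name
    | fields name date ]
| table name date field1 field2 field3
```

The subsearch returns name and date, which Splunk turns into a name="..." AND date="..." filter on the outer search; this assumes the date values are formatted identically in both sources so they compare as equal strings.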
Hello splunkers! Has anyone else experienced slow performance with Splunk Enterprise Security? For me, when I open "Content Management" under "Configure" and, say, filter to see enabled correlation searches, it might take up to 5 minutes to load just 5 or 6 correlation searches. However, if I run a search in Search & Reporting (within Enterprise Security), the searches run fairly fast, returning hundreds of thousands of events. Other cases where I experience huge lags: creating a new investigation, updating the status of a notable, deleting an investigation, opening Incident Review settings, adding a new note to an investigation. If anyone has had a similar experience, could you please share how to improve performance in the Enterprise Security app? Some notes to give more info about my case:
- The health circle is green.
- The deployment is all-in-one (Splunk Enterprise, ES, and all the apps and add-ons); everything is running on an Ubuntu Server 20.04 virtual machine with 42 GB RAM, 200 GB hard disk (thin provisioned), and 32 vCPUs.
- My Splunk deployment receives logs from around 4-5 sources; the average data load is around 500-700 MB/day.
Thanks for taking the time to read and reply to my post.
Hello! Dark mode still does not work in Splunk Enterprise 9.2.1 when an emoji is in one of the visualizations, like a single value, for example. Here is a run-anywhere dashboard. Just set it to dark mode and it stops working. Remove the pizza and it works again. If you are already in dark mode and add the emoji, then after the initial save it will work, but after refreshing it reverts to light. If you don't like pizza, then add an emoji of your choice.

<dashboard version="1.1" theme="light">
  <label>pizza dark test</label>
  <row>
    <panel>
      <single>
        <search>
          <query>| makeresults | eval emoji="ciao 🍕" | table emoji</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">none</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051", "0x0877a6", "0xf8be34", "0xf1813f", "0xdc4e41"]</option>
        <option name="rangeValues">[0,30,70,100]</option>
        <option name="refresh.display">progressbar</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">0</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
  </row>
</dashboard>

Thanks! Andrew
https://docs.splunk.com/Documentation/Splunk/9.2.1/ReleaseNotes/Fixedissues
https://docs.splunk.com/Documentation/Splunk/9.1.4/ReleaseNotes/Fixedissues

One customer reported a very interesting issue with graceful Splunk restarts: events go missing during a graceful restart / rolling restart (even when splunk stop finishes gracefully). useACK=true is an option, but ideally that should only be needed if splunk stop times out. This has been an issue for many years, and it matters most where config changes are pushed frequently, triggering frequent indexer/HF/IF restarts. The issue is fixed in 9.1.4/9.2.1:

TcpInputProcessor not able to drain splunktcpin queue during graceful shutdown

How to detect whether it applies to your deployment? Check splunkd.log for:

WARN TcpInputProc - Could not process data received from network. Aborting due to shutdown

Also, from metrics.log, see https://community.splunk.com/t5/Knowledge-Management/During-indexer-restart-indexer-cluster-rolling-restart/m-p/683763#M9962
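The splunkd.log check above can also be run as a search against the _internal index, for example:

```spl
index=_internal sourcetype=splunkd log_level=WARN TcpInputProc "Aborting due to shutdown"
```

Any hits around restart windows suggest the indexer dropped forwarder data during shutdown.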
I tried to configure an SSL/TLS connection between Forwarder and Indexer.

On the forwarder, /opt/splunkforwarder/etc/system/local/output.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
server = my.domain.com:9998
disabled = 0
clientCert = /opt/splunk/etc/auth/mycerts/client.pem
useClientSSLCompression = true

[tcpout-server://my.domain.com:9998]

The certificate has been created by Certbot and prepared according to the instructions. It works well for Splunk Web, and I believe it works here too.

On the indexer, /opt/splunk/etc/system/local/inputs.conf:

[splunktcp-ssl:9998]
disabled=0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/test_full.pem

test_full.pem is the prepared certificate from Certbot. If I use the forwarder without certificates, everything works fine, so there are no connection errors. Output of splunk list forward-server:

Configured but inactive forwards:
my.domain.com:9998

From /var/log/splunk/splunkd.log I can see the following error:

05-22-2024 11:51:03.823 +0000 ERROR TcpOutputFd [29087 TcpOutEloop] - Read error. Connection reset by peer
05-22-2024 11:51:03.823 +0000 WARN AutoLoadBalancedConnectionStrategy [29087 TcpOutEloop] - Applying quarantine to ip=99.99.99.99 port=9998 connid=2 _numberOfFailures=2

Could you please help me debug the problem?
My search is as below; the two <my search command for list user rating list> search commands are the same. How can I reduce this to a single search command? I want to use <my search command for list user rating list> only once, i.e. share the same search results between the queries. The transaction sellerId and buyerId can be looked up in the user rating list to get the rating data.

<my search command for transaction records>
| dedup orderId
| table orderId, sellerId, buyerId
| join type=left sellerId [ search <my search command for list user rating list> | table sellerId, sellerRating]
| search orderId!=""
| table orderId, sellerId, buyerId, sellerRating
| join type=left buyerId [ search <my search command for list user rating list> | table buyerId, buyerRating]
| search orderId!=""
| table orderId, sellerId, buyerId, sellerRating, buyerRating

Transaction records may look like this:

orderId  sellerId  buyerId
123      John      Marry
456      Alex      Josh

User rating (all users):

user   rating
Josh   10
Alex   -2
Lisa   1
Marry  3
John   0
Tim    0

Expected result:

orderId  sellerId  buyerId  sellerRating  buyerRating
123      John      Marry    0             3
456      Alex      Josh     -2            10
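One way to run the rating search only once is to materialize it into a lookup, then consume the lookup twice with the lookup command (the CSV name here is made up; in practice a scheduled search keeps it fresh). First, a scheduled search such as:

```spl
<my search command for list user rating list>
| table user rating
| outputlookup user_rating_tmp.csv
```

Then the transaction search needs no joins at all:

```spl
<my search command for transaction records>
| dedup orderId
| table orderId sellerId buyerId
| lookup user_rating_tmp.csv user AS sellerId OUTPUT rating AS sellerRating
| lookup user_rating_tmp.csv user AS buyerId OUTPUT rating AS buyerRating
```

This also avoids the subsearch result limits that join is subject to.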
Hi, I tried to add a piece of code to change the color of values based on certain condition, but it is not reflecting the change in my dashboard. Can you please check & advise what is going wrong? New code added - <single id="CurrentUtilisation"> <search> <query> <![CDATA[ index=usage_index_summary | fields Index as sourceIndex, totalRawSizeGB | where Index="$single_index_name$" | stats latest(totalRawSizeGB) as CurrentSize by Index | join left=L right=R where L.Index=R.extracted_Index [ search index=index_configured_limits_summary | stats latest(maxGlobalDataSizeGB) as MaxSizeGB by extracted_Index ] | rename L.CurrentSize as CurrentSizeGB, R.MaxSizeGB as MaxSizeGB, L.Index as Index | eval unit_label = if(CurrentSizeGB < 1, "MB", "GB") | eval CurrentSizeGB = if(CurrentSizeGB < 1, CurrentSizeGB*1024, CurrentSizeGB) | eval CurrentSizeDisplay = round(CurrentSizeGB) . if(unit_label == "MB", "MB", "GB") | eval CurrentSizeDisplay = if(CurrentSizeGB == 0, "None", CurrentSizeDisplay) | eval range=if(CurrentSizeGB > MaxSizeGB, "over", "under") | table CurrentSizeDisplay, range ]]> </query> </search> <option name="colorBy">value</option> <option name="drilldown">none</option> <option name="rangeColors">["red", "white"]</option> <option name="refresh.display">progressbar</option> <option name="trellis.enabled">0</option> <option name="underLabel">Current Utilisation</option> <option name="useColors">1</option> </single> What I want - If Currentsize > Maxsize then the value should display in Red else White. The query on being run independently is showing correct results for the range & current size maxsize values but the color does not change in the dashboard. I have looked up this in the community & tried using the same logic mentioned in this successful solution but to no avail.   
Reference used - https://community.splunk.com/t5/Dashboards-Visualizations/How-can-I-change-Splunk-Dashboard-single-value-field-color-of/td-p/596833