All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, from "header.JMSDestination"="topic/testTopic/Durable-Non-Subscription/20" I would like to extract the last two path segments into a new field using an eval function, so that it looks like the following: "Durable-Non-Subscription.20". As a next step I would like to split that value again so that it looks like "Non-Subscription.20". In the end I need two fields: "Durable-Non-Subscription.20" and "Non-Subscription.20". Thank you very much
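To make the transformation concrete, here is the split logic sketched in Python rather than SPL (the variable names are illustrative; in SPL the same thing would typically be done with eval, split(), and mvindex()):

```python
# Illustrative sketch of the two-step split, not SPL.
dest = "topic/testTopic/Durable-Non-Subscription/20"

# Join the last two path segments with a dot.
parts = dest.split("/")
durable = ".".join(parts[-2:])

# Drop the leading "Durable-" prefix for the second field.
non_sub = durable.split("-", 1)[1]

print(durable)   # Durable-Non-Subscription.20
print(non_sub)   # Non-Subscription.20
```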
Hi guys, is there any simple CSS code that can change a single panel's trend indicator size and font size? I just need it smaller.
We are moving several admin folks to be power users. During the transition we might have permission issues. Where can we see them?
Hi, I have logs like this:

a) 04:55:21.8630 Info {"message":"16 A Process completed, notification displayed"
b) 04:55:21.8630 Info {"message":"Process completed"

Here I need to search for exactly the string "Process completed". It should return only exact matches.
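The distinction is between a substring match (which would hit both events, since event a also contains the words "Process completed") and matching the message value exactly. Here is that logic sketched in Python with the two sample events (a regex anchored to the full field value, not SPL):

```python
import re

logs = [
    '04:55:21.8630 Info {"message":"16 A Process completed, notification displayed"',
    '04:55:21.8630 Info {"message":"Process completed"',
]

# Match only events whose message value is exactly "Process completed",
# not events that merely contain the phrase somewhere inside the value.
pattern = re.compile(r'"message":"Process completed"')
matches = [line for line in logs if pattern.search(line)]
print(len(matches))  # 1
```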
I have indexed a few sample logs into Splunk:

2020-02-15T10:41:54.305Z servername.com sev="INFO" msg_details="Audit success" pol_name="policy_name"

Splunk by default extracts the fields sev, msg_details, and pol_name with the appropriate values, and everything looks good. However, I need to rename the fields: Severity instead of sev, Description instead of msg_details, and Policies instead of pol_name. I have updated props.conf:

[sourcetype]
FIELDALIAS-severity = sev AS Severity
FIELDALIAS-msg_details = msg_details AS Description
FIELDALIAS-pol_name = pol_name AS Policies

The fields are extracted properly, but when I run a search in Verbose Mode I can see both sev and Severity, which is quite annoying for the analysts. Is this normal, or do I have to write an EXTRACT with an appropriate regex in order to show only the Severity field and not sev?
Hi, why is my UF on Windows executing various splunk-* tools without them being configured in any input? Every few minutes I see them in Sysmon:

splunk-powershell.exe
splunk-regmon.exe
splunk-powershell.exe
splunk-netmon.exe
splunk-admon.exe
splunk-MonitorNoHandle.exe
splunk-winprintmon.exe

I do not see them in any inputs.conf. Thx, afx
Hi everyone, we have logs that contain a numeric field named "var" whose value changes through time, so each log has two more fields: startDate (when var acquired the value written in that log) and endDate (when that value will expire). Both dates are written in the %Y%m%d format, and of course some of the endDate values are in the future. I want to create a timechart where the x-axis starts from the first startDate value of the var field and ends at the latest endDate, which is in the future. So my question is, after many days of research: is it possible to create a time range to be used in the timechart? Something like:

| eval customTimeRange = [earliest(startDate) .. latest(endDate)]
| chart values(var) over customTimeRange by source

Or something similar? Maybe it is worth mentioning that I want it as a line chart, where the line is continuous and the last value extends on the chart until the endDate value. Any help would be appreciated! Thanks in advance.
Hi Team, from the output below I want to know the exact count of disconnected statuses for each Server_Name, ignoring duplicate counts. We use a script in Splunk to ingest the server status every 5 minutes. Once Splunk triggers an alert that a server is disconnected, we start it manually, which takes 15-20 minutes; in the meantime the script executes 3-4 more times and keeps ingesting the server status into Splunk. So if I count the total disconnected states with stats count, it includes the duplicates as well, but we need to identify the exact count.

Server_Name Status
server1.example.com disconnected
server1.example.com disconnected
server1.example.com connected
server1.example.com connected
server1.example.com connected
server1.example.com disconnected
server1.example.com disconnected
server1.example.com connected
server1.example.com connected
server1.example.com connected
server2.example.com disconnected
server2.example.com disconnected
server2.example.com disconnected
server2.example.com disconnected
server2.example.com disconnected
server2.example.com connected
server2.example.com connected
server2.example.com disconnected
server2.example.com disconnected
server2.example.com connected
server3.example.com connected
server3.example.com disconnected
server3.example.com disconnected
server3.example.com disconnected
server3.example.com connected
server3.example.com disconnected
server3.example.com disconnected
server3.example.com disconnected
server3.example.com connected
server3.example.com connected
server3.example.com disconnected
server3.example.com disconnected
server3.example.com disconnected
server3.example.com connected

As per the above result, the expected disconnected count for each server is:

server1.example.com - disconnected - count=2
server2.example.com - disconnected - count=2
server3.example.com - disconnected - count=3

Any logic? Please suggest.
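The underlying logic is to count transitions into the disconnected state (runs of consecutive "disconnected" rows) rather than raw rows; in SPL this is roughly what streamstats comparing each Status to the previous one per server achieves. Here is the same idea sketched in Python, using the data from the table above:

```python
from itertools import groupby

# The per-server status sequences from the table above, in order.
sequences = {
    "server1.example.com": ("disconnected disconnected connected connected connected "
                            "disconnected disconnected connected connected connected").split(),
    "server2.example.com": ("disconnected disconnected disconnected disconnected disconnected "
                            "connected connected disconnected disconnected connected").split(),
    "server3.example.com": ("connected disconnected disconnected disconnected connected "
                            "disconnected disconnected disconnected connected connected "
                            "disconnected disconnected disconnected connected").split(),
}

# Collapse consecutive duplicate statuses, then count the remaining
# "disconnected" runs: each run is one real outage.
counts = {}
for server, statuses in sequences.items():
    runs = [status for status, _ in groupby(statuses)]
    counts[server] = runs.count("disconnected")

print(counts)
# {'server1.example.com': 2, 'server2.example.com': 2, 'server3.example.com': 3}
```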
Hi, I have recently started looking at .conf files and configuring them to log specific site data. After I made my changes and everything was getting logged, I came across an odd issue whereby my index shows that both the earliest and latest data are 20+ days old, yet when I run searches and queries against the index, data is still coming in in real time. Has anyone come across this, and how would I go about troubleshooting it?

Example of my index:
IndexA   Earliest: 23 days ago   Latest: 23 days ago   $SPLUNK_DB/IndexA/db

(Disclaimer: I'm relatively new to this Splunk world, please bear with me.)
Hey Splunkers, our security team ran a Micro Focus vulnerability scan on one of our Splunk applications, and we are stuck resolving one of the reported vulnerabilities. Please have a look at the content below:

Request:
GET /en-US/splunkd/_raw/services/dmc-conf/settings/settings?output_mode=json&=1580502716111 HTTP/1.1
Host: splunkhost.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0
Accept: text/javascript, text/html, application/xml, text/xml, /
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
X-Requested-With: XMLHttpRequest
Referer: https://splunkhost.com/en-US/app/launcher/home
Pragma: no-cache
Cookie: session_id_443=2d27370ac5f16e9354644d57ce1c121f9d040047; splunkweb_uid=26C23B88-147C-4748-9114-30F3DA995665; splunkd_443=QBb1wG72NPI89_yHW24v6Znjs^NKV70YtHeEUnJXKhFeTcfUoF^IRd982b1S6JURGd4nTrC3g5TU_wxK4TlbljBml0SMmU6hebQlBvIKhXoNhUWlce4KBYA27aCa7NQ7mvo70LGO; splunkweb_csrf_token_443=17486043298053400227; login=true; CustomCookie=WebInspect156349ZX667F65AD929D4167B5A374A3F6AA6A51Y86EE
Connection: keep-alive
X-WIPP: AscVersion=X.X.X.X
X-Scan-Memo: SID="AA07BC3BA2A5D3254DB3183B066094A4"; SessionType="StartMacro"; CrawlType="None";
X-RequestManager-Memo: sid="1429"; smi="0"; Category="EventMacro.StartMacro"; MacroName="APP+360+Test.webmacro";
X-Request-Memo: ID="e95a1883-d78b-4fba-bcad-d72f4a691c71"; tid="295";

Response:
HTTP/1.1 404 Not Found
Date: Fri, 31 Jan 2020 20:31:56 GMT
Expires: Thu, 26 Oct 1978 00:00:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
Content-Type: application/json; charset=UTF-8
X-Content-Type-Options: nosniff
Content-Length: 50
Vary: Cookie
Connection: Keep-Alive
Set-Cookie: splunkd_443=QBb1wG72NPI89_yHW24v6Znjs^NKV70YtHeEUnJXKhFeTcfUoF^IRd982b1S6JURGd4nTrC3g5TU_wxK4TlbljBml0SMmU6hebQlBvIKhXoNhUWlce4KBYA27aCa7NQ7mvo70LGO; Path=/; Secure; HttpOnly; Max-Age=3600; Expires=Fri, 31 Jan 2020 21:31:56 GMT
Set-Cookie: splunkweb_csrf_token_443=17486043298053400227; Path=/; Secure; Max-Age=157680000; Expires=Wed, 29 Jan 2025 20:31:56 GMT
X-Frame-Options: SAMEORIGIN
Server: Splunkd
...TRUNCATED...

We are using Splunk Enterprise 7.2
Hi, I am getting the error below for the '_introspection' index:

The percentage of small buckets (75%) created over the last hour is high and exceeded the red thresholds (50%) for index=_introspection, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=4, small buckets=3

I want to know why this is happening with the _introspection index. I understand that if I increase the hot bucket count the error may get resolved, but I would like to know why it is happening. Thanks,
Hi,

status      count
ERROR       9346
PROCESSED   148066
PROCESSING  149571

I want to do the subtraction for the above example:

Total = ERROR + PROCESSED - PROCESSING
Total = 9346 + 148066 - 149571
Total = ?
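For reference, the arithmetic itself works out as follows (in SPL this would typically be an eval on the stats results; the sketch below just shows the calculation):

```python
# Counts taken from the table above.
counts = {"ERROR": 9346, "PROCESSED": 148066, "PROCESSING": 149571}

total = counts["ERROR"] + counts["PROCESSED"] - counts["PROCESSING"]
print(total)  # 7841
```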
What does this error message mean? 02-10-2020 07:52:50.896 -0500 ERROR MongoModificationsTracker - Could not dump KVStore collections. Connection failed.
Hi, we tried charting.fieldColors and charting.seriesColors, but neither is working. charting.seriesColors does change the color, but it applies only the first color to all of the bars. Please suggest a way to achieve this. Regards, GJ
I have a lookup table that shows all the next-level managers of a particular manager as UserManager UserManagerx1 UserManagerx2 UserManagerx3... UserManagerx20. The top-level manager has about 20 nested managers, but all others have far fewer. My indexed data has only data on the direct user and manager. I have a dashboard select input to select the top-level manager that the user is interested in. I'm trying to create a search that will then display data for all direct-report users as well as all levels below that manager via the lookup table. This isn't working as expected. Thanks for any suggestions.

index="disk_index" [| inputlookup sdmanager.csv | table UserManager]
| search blah blah blah
| append [ search index="disk_index" UserManager="$top_manager$" ]
| append [ search index="disk_index" UserManager="$top_manager$" | lookup sdmanager.csv UserManager OUTPUT UserManagerx1 as UserManager ]
| append [ search index="disk_index" UserManager="$top_manager$" | lookup sdmanager.csv UserManager OUTPUT UserManagerx2 as UserManager ]
... and so on until ...
| append [ search index="disk_index" UserManager="$top_manager$" | lookup sdmanager.csv UserManager OUTPUT UserManagerx20 as UserManager ]
| stats sum(UserUsage) as TotalUserUsage by UserManager DiskUser

And then I've also tried it like this:

index="disk_index" [| inputlookup sdmanager.csv | table UserManager]
| search blah blah blah
| join type=inner [| search UserManager="LAST,FIRST" | stats sum(UserUsage) as TotalUserUsage by UserManager configuration process DiskUser ]
| join type=outer [| inputlookup sdmanager.csv | search UserManager="LAST,FIRST" | fields UserMangerx1 | stats sum(UserUsage) as TotalUserUsage by UserManager configuration process DiskUser ]
| table UserManager configuration process DiskUser TotalUserUsage

My next idea is to try map, but I've never used it and it seems widely discouraged. Perhaps my lookup should be formatted differently? Or a foreach command? Thanks again
Hi all, I am trying to deploy ITSI and was successful in doing so on a single search head: it integrated with SAI and worked fine. But as soon as I tried the same on a search head cluster, it won't integrate with SAI somehow. Any ideas or suggestions are welcome.

Splunk version: 7.3.3
ITSI version: 4.4.1

I have tried the integration two or three times now. The Data Input setting for "Splunk App for Infrastructure - Entity Migration" is not available, and I am getting this error: "Current instance is running in SHC mode and is not able to add new inputs". Thanks in advance. Regards, Kulwinder Singh
How can I find the most delayed transactions? The log file looks like the sample below. I want to find which transactions were delayed, sort them in descending order, show the result in a table, and subtract the timestamps to show the delay in front of each transaction.

Here is the log:
16:30:53:002 start[C1]L[143]F[10]
16:30:54:002 start[C2]L[143]F[20]
16:30:55:002 start[C5]L[143]F[02]
16:30:56:002 start[C12]L[143]F[30]
16:30:57:002 start[C5]L[143]F[7]
16:30:58:002 end[C1]L[143]F[10]
16:30:59:002 start[C1]L[143]F[11]
16:30:60:002 end[C1]L[143]F[11]

Expected output:
Transaction                         Delay
16:30:53:002 start[C1]L[143]F[10]   5s
16:30:58:002 end[C1]L[143]F[10]
16:30:59:002 start[C1]L[143]F[10]   1s
16:30:60:002 end[C1]L[143]F[10]
...

FYI:
1. Sometimes we have a start without an end, or an end without a start.
2. "F" means footprint; sometimes "F" might not be unique, so after the first "start" we should expect an "end".

Any recommendation? Thanks
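One way to think about the pairing is: parse each line, key it by channel plus footprint (an assumption based on note 2 above), match each end to the most recent open start with the same key, and skip unmatched events. In SPL the transaction or streamstats commands would normally do this; the Python sketch below shows the logic on a subset of the sample log:

```python
import re

# Subset of the sample log above; C2 has a start with no end.
logs = [
    "16:30:53:002 start[C1]L[143]F[10]",
    "16:30:54:002 start[C2]L[143]F[20]",
    "16:30:58:002 end[C1]L[143]F[10]",
    "16:30:59:002 start[C1]L[143]F[11]",
    "16:30:60:002 end[C1]L[143]F[11]",
]

line_re = re.compile(r"(\d+):(\d+):(\d+):(\d+) (start|end)\[(\w+)\]L\[\w+\]F\[(\w+)\]")

def to_ms(h, m, s, ms):
    # Manual conversion tolerates out-of-range values like the 60-second sample.
    return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(ms)

open_starts = {}  # (channel, footprint) -> start time in ms
delays = []
for line in logs:
    h, m, s, ms, kind, chan, fp = line_re.match(line).groups()
    t = to_ms(h, m, s, ms)
    key = (chan, fp)
    if kind == "start":
        open_starts[key] = t           # unmatched starts simply stay here
    elif key in open_starts:           # ends without a start are skipped
        delays.append((key, (t - open_starts.pop(key)) / 1000))

delays.sort(key=lambda d: d[1], reverse=True)
print(delays)  # [(('C1', '10'), 5.0), (('C1', '11'), 1.0)]
```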
What is the best and most efficient way to write an alert for an index with no events? I have the following:

index=_internal earliest=-60m | stats count | where count=0

or

| metadata type=sources index=* | eval flatline=round((now()-recentTime)/60,0)

Thank You
Good morning, I hope someone can help me out here. I am trying to get a list of IPs with more than 100 hits, but I want to exclude an external list that is saved as an inputlookup file.

index=server site=login
| stats count AS Hits BY ip
| search Hits > 100 NOT [| inputlookup savedfile | fields test_ip | rename test_ip AS ip]

The problem I am facing is that in both cases (i.e. with and without the last "NOT [|..." line) I get the same number of results, even though a manual review shows that a few IPs appear both in the input file and in the "base" query. The following also did not provide the desired results:

index=server site=login
| stats count AS Hits BY ip
| search Hits > 100
| search NOT [ | inputlookup savedfile | fields test_ip | rename test_ip AS ip ]

Thanks in advance for the feedback and thinking,
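The intended result is a set difference: keep the high-hit IPs that are absent from the lookup list. Here is that logic sketched in Python with made-up sample IPs (the data is illustrative, not from the question); for the SPL version to behave the same way, the subsearch's field name after the rename must exactly match the field being filtered:

```python
# Illustrative set logic behind the NOT-subsearch exclusion.
hits = {"10.0.0.1": 150, "10.0.0.2": 300, "10.0.0.3": 120, "10.0.0.4": 40}
lookup_ips = {"10.0.0.2"}  # example contents of savedfile's test_ip column

# Keep IPs over the threshold that are not in the exclusion list.
result = {ip: n for ip, n in hits.items() if n > 100 and ip not in lookup_ips}
print(result)  # {'10.0.0.1': 150, '10.0.0.3': 120}
```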
Hello, I am posting for the first time. Please tell me about the tables that can be retrieved from ServiceNow using the add-on. I want to import "sys_update_xml" via the add-on; what should I do? "sys_update_xml" is not listed by default in inputs.conf. By the way, I want to import the following 3 tables:

sysevent
sys_audit_delete
sys_update_xml

The following content will be placed at the path below. Do I have to keep the [snow] stanza?

path: $SPLUNK_HOME/etc/apps/Splunk_TA_snow/local
file: inputs.conf

[snow]
index = main
timefield = sys_updated_on
disabled = false
interval = 60
start_by_shell = false
id_field = sys_id

[snow://sysevent]
disabled = false
timefield = sys_created_on
table = sysevent
duration = 60
account =
since_when = 2000-01-01 00:00:00

[snow://sys_audit_delete]
disabled = false
timefield = sys_updated_on
table = sys_audit_delete
duration = 60
account =
since_when = 2000-01-01 00:00:00