All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I want to view which Bitbucket files were changed, and who changed them, in Splunk. Can someone please share the steps to do that?
Hello Splunkers! I want to write a search that shows a timeline of authentication activities, like the following: index=MyIndex eventtype=Authentication user=* action=* src=* | stats count(user) by _time. The problem is that the statistics output shows the time in seconds, so I get a count per second. I want the search to show the count of authentication attempts per minute, not per second. Thanks ^_^
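A minimal sketch of one way to do this, assuming the same index and eventtype as the post above: bucket _time into one-minute spans with `bin` before counting (or replace the last two lines with `timechart span=1m count`):

```
index=MyIndex eventtype=Authentication user=* action=* src=*
| bin _time span=1m
| stats count AS attempts BY _time
```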
Hi! I currently have a CSV file which shows the expected time my daily reports should be sent out. I also have a search which displays the time each report is actually sent, and I have created a field called "Delay" which shows the difference between the expected time and the actual time. My issue: if I search events over a range, e.g. the past week, and compute the delay for each day, a report that wasn't sent out on Monday as expected but was instead delayed to Tuesday gets a "Delay" of 0, because the comparison uses only the expected time of day rather than the expected date and time. For example, reports due on the 2nd and 3rd of January were delayed until the 4th of January, yet because they were sent at a time of day before the expected time, the delay shows 0 rather than the correct value of over a day. Any ideas? Thanks in advance.
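A sketch of one approach, with hypothetical field names (expected_date, expected_time, and actual_epoch are assumptions, not from the post): build a full date-plus-time epoch for the expected send and diff against the actual send epoch, so a next-day send yields a positive delay rather than 0.

```
... | eval expected_epoch=strptime(expected_date." ".expected_time, "%d/%m/%Y %H:%M")
| eval Delay=actual_epoch - expected_epoch
| eval Delay_readable=tostring(Delay, "duration")
```

The key design point is that strptime is given both the date and the time, so a report delayed past midnight produces a Delay of more than 86400 seconds instead of a small or zero value.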
Hi, I have a query that gives a table of records satisfying a certain condition. I have another query that gives the same result fields, but with a different search string. Now I want to find the records that are present in the first result and not in the second one. Query 1: index=mail sourcetype=testmail "Search condition 1" | table field1 field2 field3 | dedup field1 field2 field3. Query 2: index=mail sourcetype=testmail "Search condition 2" | table field1 field2 field3 | dedup field1 field2 field3. How do I find the records that are only present in Query 1, and list them as a table with all three fields?
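One common sketch for this kind of set difference, using the two searches above: run Query 2 as a subsearch whose results become a NOT filter on Query 1.

```
index=mail sourcetype=testmail "Search condition 1"
| dedup field1 field2 field3
| search NOT
    [ search index=mail sourcetype=testmail "Search condition 2"
      | dedup field1 field2 field3
      | fields field1 field2 field3 ]
| table field1 field2 field3
```

One caveat: subsearches are capped (10,000 results by default), so if Query 2 can exceed that, an alternative is to append both searches, tag each with a marker field, and keep combinations seen only in Query 1 via `stats values(marker) by field1 field2 field3`.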
Hi Splunk Admins. Just looking for some advice around setting the data segment size (ulimit -d) for Splunk on a Linux server (RHEL).

Older documentation (v7.3) recommended setting this value to be essentially unlimited, with a kibibyte value of 1073741824, i.e. ~1 TB: https://docs.splunk.com/Documentation/Splunk/7.3.8/Installation/Systemrequirements#Considerations_regarding_system-wide_resource_limits_on_.2Anix_systems ("Data segment size, ulimit -d: 1073741824").

I see the v8.x documentation has now changed the data segment size recommendation to be more of a general guideline, with an 8 GB example: https://docs.splunk.com/Documentation/Splunk/latest/Installation/Systemrequirements#Considerations_regarding_system-wide_resource_limits_on_.2Anix_systems ("Data segment size, ulimit -d: the maximum RAM you want Splunk Enterprise to allocate, in kilobytes. For example, 8 GB is 8000000.")

It appears Splunk does not really have a strong opinion on a minimum size now either. I think on RHEL the data segment size just defaults to unlimited anyway, or at least it does on our VM servers. I don't believe setting this value alone helps protect Splunk from excessive memory use either. From what I can tell from googling about data segments, if it is set at all, it does not even need to be an excessively large value. Happy to admit I'm no expert, though.

Anyway, just wondering if anyone has experience setting this value in their environments, or a view on whether this data segment size limit even really needs to be set at all, on Linux at least.
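For reference, a sketch of where this limit lives on a typical RHEL box (the username and values here are illustrative examples taken from the 8 GB figure in the v8.x docs, not a recommendation):

```
# Check the current data segment limit, run as the user that starts Splunk
ulimit -d

# Persistent setting via /etc/security/limits.conf (values are in KB)
#   splunk  soft  data  8000000
#   splunk  hard  data  8000000

# For a systemd-managed Splunk, the equivalent unit-file directive is:
#   LimitDATA=8G
```

Note that systemd unit limits override limits.conf for services started by systemd, which is one reason a limit that looks configured can still show as unlimited for the splunkd process.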
Hi, I have created the below table using this query:

index=main host="abcde" | rex field=_raw "(?ms)Label\s+Name\s:\s(?<App_Name>\w+\S+)" | rex field=_raw "(?ms)Sync\sState\s:\s(?<App_State>[\w\s]+\w)\s+Number" | table App_Name,App_State

App_Name    App_State
abc         Stopped
cde         Running
abc         Running
xyz         Stopped
the         Running
abc         Partially running
abc         Stopped
xyz         Running
the         Running
abc         Running

and so on. I want the table in the below format (an App_State should not repeat for a particular App_Name, but each distinct state should be shown once per App_Name):

App_Name    App_State
abc         Running
abc         Partially running
abc         Stopped
cde         Running
xyz         Running
xyz         Stopped
the         Running

I used the dedup command along with my query:

index=main host="abcde" | rex field=_raw "(?ms)Label\s+Name\s:\s(?<App_Name>\w+\S+)" | rex field=_raw "(?ms)Sync\sState\s:\s(?<App_State>[\w\s]+\w)\s+Number" | table App_Name,App_State | dedup App_Name

But I am getting this output:

App_Name    App_State
abc         Running
cde         Running
xyz         Running
the         Running

Please help me create the query to get the output in the desired way. Thank you.
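`dedup App_Name` keeps only one row per App_Name. To keep one row per (App_Name, App_State) pair, dedup on both fields and then sort; a sketch reusing the rex extractions above:

```
index=main host="abcde"
| rex field=_raw "(?ms)Label\s+Name\s:\s(?<App_Name>\w+\S+)"
| rex field=_raw "(?ms)Sync\sState\s:\s(?<App_State>[\w\s]+\w)\s+Number"
| dedup App_Name App_State
| sort App_Name App_State
| table App_Name App_State
```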
Hi, I need help determining the browsers that appear in our logs. I believe the simple way is to use the TA - UA parser app or an external script, but unfortunately I do not have enough access rights to use those tools. SPL command: index=aws sourcetype = * Website="*" | stats count(eval(match(User_Agent, "Firefox"))) as "Firefox", count(eval(match(User_Agent, "Chrome"))) as "Chrome", count(eval(match(User_Agent, "Safari"))) as "Safari", count(eval(match(User_Agent, "MSIE"))) as "IE", count(eval(match(User_Agent, "Trident"))) as "Trident", count(eval(NOT match(User_Agent, "Chrome|Firefox|Safari|MSIE|Trident"))) as "Other" | transpose | sort by User_Agent. I tried the above command, but it puts all the data into "Other": Firefox=0, Chrome=0, IE=0.
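When every event lands in "Other", the usual suspects are the field name (the user-agent string may live under a different extracted field) or case sensitivity. A sketch that matches case-insensitively against a coalesced field; `http_user_agent` is a hypothetical alternate field name to verify against your own data:

```
index=aws sourcetype=* Website="*"
| eval ua=coalesce(User_Agent, http_user_agent)
| stats count(eval(match(ua, "(?i)firefox"))) AS Firefox,
        count(eval(match(ua, "(?i)chrome"))) AS Chrome,
        count(eval(match(ua, "(?i)safari"))) AS Safari,
        count(eval(match(ua, "MSIE"))) AS IE,
        count(eval(match(ua, "Trident"))) AS Trident,
        count(eval(isnotnull(ua) AND NOT match(ua, "(?i)chrome|firefox|safari|MSIE|Trident"))) AS Other
```

Running `| stats count by User_Agent` first is a quick way to confirm the field actually exists and see what the raw values look like.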
Hi Splunkers, Good day. I am trying to perform search-time masking using a Calculated Field to replace _raw with the required result. This works fine for my particular data of concern. However, it gets complicated when the same field in the same event type has to be masked a different way. Citing an example below to explain more clearly. Masking, 16 digits: 2021/01/21 - 01:15 AM <ACT>1234567890123456</ACT> Result: 2021/01/21 - 01:15 AM <ACT>123456######3456</ACT> However, if I see 15 digits for this field, the masking should be 5 hashes (#####) rather than the 6 used for 16 digits. Masking, 15 digits: 2021/01/25 - 01:15 AM <ACT>987654321012345</ACT> Result: 2021/01/25 - 01:15 AM <ACT>987654#####2345</ACT> Since the same field, _raw, is being worked on, I reckon this is not possible. props.conf: [<sourcetype>] EVAL-_raw = replace(_raw,"(\d{6})(\d{5,6})(\d{4})","\1######\3") Please let me know your thoughts/suggestions. Thanks in advance. Cheers!
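A sketch of one way to handle both widths in a single calculated field: use case() to pick the mask by digit count. The <ACT> anchors are taken from the sample events above and are an assumption about the real format; adjust as needed.

```
[<sourcetype>]
# 16 digits -> 6 hashes, 15 digits -> 5 hashes; other events pass through untouched
EVAL-_raw = case(match(_raw, "<ACT>\d{16}</ACT>"), replace(_raw, "(<ACT>\d{6})\d{6}(\d{4}</ACT>)", "\1######\2"), match(_raw, "<ACT>\d{15}</ACT>"), replace(_raw, "(<ACT>\d{6})\d{5}(\d{4}</ACT>)", "\1#####\2"), true(), _raw)
```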
I have the following table. If the number of scg fails on a day is twice that of the previous day, I want to highlight it. How should I do this? Hope you can help. Thanks!

date   scg_fail_number
1/01   12
1/02   24
1/03   30
1/04   60
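A sketch using streamstats to carry the previous day's value forward and flag doublings (this assumes one row per day, sorted by date, with the field names from the table above):

```
... | sort 0 date
| streamstats current=f window=1 last(scg_fail_number) AS prev_fail
| eval doubled=if(isnotnull(prev_fail) AND scg_fail_number >= 2*prev_fail, "yes", "no")
```

In a dashboard table, the `doubled` field can then drive the highlighting, e.g. via color-based cell formatting keyed on its value.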
I have received new licenses for AppDynamics and I want to update the licenses without changing the access key on all the servers. I found this article online, but I need some help on what steps to perform for that to happen: https://docs.appdynamics.com/display/PRO44/Controller+Secure+Credential+Store
Hello All, first-time user in the community. We currently have a number of users in our Splunk environment using local authentication; LDAP was not configured at initial deployment. Is it still possible to convert local users to use LDAP for authentication? Or what do you recommend?
How would I take the results from this search: | rex field=initiatedBy.user.userPrincipalName "ex(?<GUID>\d+)z\@" and populate them into this LDAP search: | ldapsearch domain=DEFAULT search="(&(objectClass=user)(exguid=GUID))" | table name
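One hedged approach, assuming the ldapsearch command here comes from the SA-ldapsearch app: run it once per result with `map`, substituting the extracted GUID token (the base search before the rex is a placeholder for whatever feeds the rex):

```
index=<your_index> ...
| rex field=initiatedBy.user.userPrincipalName "ex(?<GUID>\d+)z\@"
| dedup GUID
| map maxsearches=100 search="| ldapsearch domain=DEFAULT search=\"(&(objectClass=user)(exguid=$GUID$))\" | table name"
```

`map` runs a search per input row, so deduping GUIDs first and setting maxsearches keeps the fan-out bounded.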
I have a 200 GB/day license installed in our Splunk Enterprise cluster. Daily license usage hovers around ~180 GB constantly through the whole month, except around mid-month and month-end, when it reaches ~250 GB, on roughly 5 days a month. What is the best way to accommodate this use case? 1. Buy a 250 GB/day license (which I think would be wasted on the remaining ~25 days, since usage is constantly below 200 GB)? 2. Is there any way to scale the license up and down in terms of pricing? Thanks in advance!
Hello all, I am currently running into issues with NetScaler logs in the following format:

2021-01-28T06:14:09.884506+08:00 10.10.10.10 01/27/2021:14:14:14 hostname

I have used the following props to successfully set time extraction to the second timestamp on other heavy forwarders, but have been unable to successfully apply it on this heavy forwarder:

TIME_FORMAT = ^\S\s+\S+\s+
TIME_PREFIX = %m/%d/%Y:%H:%M:%S

I have also tried using a transforms to strip the original header, and used the following configs with those logs (999.999.999.999 01/27/2021:14:14:14 hostname):

TIME_FORMAT = ^\S\s+
TIME_PREFIX = %m/%d/%Y:%H:%M:%S

When going to the GUI of the HF and trying to index this file, Splunk says that it fails to parse the timestamp and is reverting to the modtime of the file. I am not sure where the error could be, as I copied a working config from a different forwarder. I have also tried a more specific regex:

TIME_FORMAT = ^\d{4}\-\d{2}\-\d{2}T\d{2}\:\d{2}\:\d{2}\.\d+\+\d+\:\d+\s+\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\s+

and still receive an error. Both servers are running 8.0.3, and the file is being written to disk on the forwarder with the props applied. I have rewritten the props multiple times and removed all spaces to ensure nothing was being added by default. When I load the citrix_netscaler sourcetype in "getting data in", the regex shows up with an error; if I cut and repaste it, it matches the timestamp successfully, but after saving, the errors come back. Any advice on this would be appreciated.
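Worth noting: in props.conf, TIME_PREFIX takes a regex that matches the text immediately before the timestamp, while TIME_FORMAT takes the strptime pattern; the snippets in the post appear to have the two swapped. A sketch for the sample event (the stanza name is hypothetical):

```
[netscaler_custom]
# Skip the leading syslog timestamp and source IP, then parse the second timestamp
TIME_PREFIX = ^\S+\s+\S+\s+
TIME_FORMAT = %m/%d/%Y:%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```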
Hey guys, could you please help clarify this paragraph from the docs: https://docs.splunk.com/Documentation/Splunk/8.1.1/Installation/MigrateaSplunkinstance "When you copy individual bucket files, you must make sure that no bucket IDs conflict on the new system. Otherwise, Splunk Enterprise does not start." I'm not quite sure how this can happen?
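For context, a sketch based on the general bucket-naming convention (timestamps and paths below are illustrative; verify against the docs for your version): a bucket directory name ends in a numeric local ID that must be unique within an index, so copying buckets from another instance can collide with IDs already in use.

```
# Bucket directory names follow db_<newest_time>_<oldest_time>_<localID>, e.g.:
ls $SPLUNK_DB/main/db
#   db_1611700000_1611600000_42    <- existing bucket with local ID 42
# Copying in a bucket from another instance whose name also ends in _42
# conflicts with this one; renaming the copied bucket to an unused ID
# before starting Splunk is the usual remedy.
```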
Hello, I am wondering if there is a way in JavaScript to modify attributes of forms/dashboards on the fly. Specifically, if I have a dashboard whose root element looks like this: <form refresh="100"> can I edit that refresh value on the fly (maybe based on a user event)? I'm not looking to re-implement refresh (like location.reload()), just change the attribute value. Thanks in advance!
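A sketch of the DOM side, assuming a SimpleXML dashboard with a JS extension; the button selector is a hypothetical user event, and an important caveat is that SimpleXML reads the refresh attribute at load time, so rewriting it afterwards may not reschedule the refresh timer by itself:

```javascript
require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function($) {
    // On some user event (hypothetical button), rewrite the refresh attribute
    // on the dashboard's root form element in the rendered page.
    $(document).on('click', '#slowRefreshBtn', function() {
        $('form.dashboard-body').attr('refresh', '300');
    });
});
```

If the goal is for the new interval to actually take effect, it may be more reliable to manage refresh yourself (e.g. a setInterval that re-runs the searches) than to mutate the attribute.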
I have an HEC token configured as follows: [http://customer] disabled = 0 index = index1 indexes = index1, index2, index3, index4 token = <token> sourcetype = _json However, when I test the endpoint using the command: curl -k -L https://<mysplunkurl>/collector -H 'Authorization: Splunk <token>' -d '{"sourcetype": "local_cURL test", "event":"Test Event For New Index"}' I get a successful 200 response, but the event doesn't appear under any of the configured indexes. Is this the correct configuration to send events to multiple indexes?
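Two things worth checking, sketched below: the HEC endpoint path is normally /services/collector (by default on port 8888's neighbor, 8088), and without an explicit "index" key in the payload, events go to the token's default index (index1 in the stanza above), so routing to the others means naming them per event. The host and port here are placeholders:

```
# The index named in the payload must appear in the token's "indexes" list
curl -k https://<mysplunkurl>:8088/services/collector \
  -H 'Authorization: Splunk <token>' \
  -d '{"index": "index2", "sourcetype": "_json", "event": "Test Event For New Index"}'
```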
I have a table where the x-axis labels are a JSON object of parameters that were passed into a test. The y-axis is a bar chart of min, max, and average durations per parameter set. I have a drilldown that passes the x-axis label into a lower chart that shows each individual duration of a specific test given those parameters. The search comes out as something like this:

```
eventtype="my_event_type"  header.run_id="my_run_id" header.type="type_of_test" payload.parameters="{"some": "json", "blob": "with", "date": "the", "parameters": "defined"}" | mvexpand durations | table ...
```

Splunk doesn't seem to play nice with comparing objects for equality, and I can't compare the fields directly because, given the test type, I don't know what the parameters are.
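A sketch of one workaround: rather than comparing the JSON blob as an object or embedding it in the base search, compare it as an exact string in a where clause, quoting the drilldown token so the embedded quotes survive. The token name click_label is an assumption from a typical drilldown setup:

```
eventtype="my_event_type" header.run_id="my_run_id" header.type="type_of_test"
| eval params=spath(_raw, "payload.parameters")
| where params=$click_label|s$
| mvexpand durations
| table durations
```

The `|s` token filter wraps the value in quotes and escapes internal quotes, which sidesteps the quoting collision visible in the original search string.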
The problem I'm currently having: our software team has logs being written to a file of mixed format and structure. I'm trying to use dynamic sourcetypes so that I can split these into sourcetypes and then do the proper field extractions. I have followed this article: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Bypassautomaticsourcetypeassignment but it doesn't seem to be working. Here is my current config: props.conf: [source::C/Windows/SysWOW64/SIXPAC/SIXPAC/*.log] TRANSFORMS=SIXPAC = sixpac_service transforms.conf: [sixpac_service] SOURCE_KEY = MetaData: source REGEX = SIXPACService\.(.+)\.(.+)\s FORMAT = sourcetype::SIXPACService.$1.$2 DEST_KEY = MetaData:Sourcetype Anyone have some ideas as to why this isn't working?
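A few things stand out in that config: the TRANSFORMS line has an extra `=SIXPAC` (the syntax is `TRANSFORMS-<class> = <stanza>`), the source path looks un-Windows-like, and `MetaData: source` has a stray space and lowercase "source". A corrected sketch; the Windows path spelling is an assumption about the real source:

```
# props.conf
[source::C:\Windows\SysWOW64\SIXPAC\SIXPAC\*.log]
TRANSFORMS-set_sixpac = sixpac_service

# transforms.conf
[sixpac_service]
# If SIXPACService\.(.+)\.(.+) is meant to match the event text rather than
# the file path, omit SOURCE_KEY entirely (the default is _raw); to key off
# the path, it would be SOURCE_KEY = MetaData:Source (capital S, no space).
REGEX = SIXPACService\.(.+)\.(.+)\s
FORMAT = sourcetype::SIXPACService.$1.$2
DEST_KEY = MetaData:Sourcetype
```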
I have two indexers (IDX) pointing to a search head (SH). A couple of weeks ago an error started flooding in from splunkd. It looks to be for the metrics.log file, but I cannot understand what the error is and have not been able to find a solution by searching the community forums. Essentially, the following errors continue to come in, about 1000 errors an hour or so. It was only coming from one IDX at first, but now it is coming from both IDXs. Sample errors:

01-25-2021 16:50:16.946 +0000 ERROR STMgr - dir='/opt/splunk/var/lib/splunk/_metrics/db/hot_v1_49' unexpected rc=-104 (kw= sourcetype::splunk_metrics_log, len=31) warm_rc[0,2] from st_txn_put

01-25-2021 16:50:16.946 +0000 ERROR STMgr - dir='/opt/splunk/var/lib/splunk/_metrics/db/hot_v1_49' unexpected rc=-104 (kw= host::iinabqlvtsplidx2, len=23) warm_rc[0,2] from st_txn_put

01-25-2021 16:50:16.946 +0000 ERROR STMgr - dir='/opt/splunk/var/lib/splunk/_metrics/db/hot_v1_49' unexpected rc=-104 (kw= source::/opt/splunk/var/log/introspection/kvstore.log, len=54) warm_rc[0,2] from st_txn_put

01-25-2021 16:50:16.946 +0000 ERROR STMgr - dir='/opt/splunk/var/lib/splunk/_metrics/db/hot_v1_49' unexpected rc=-104 (kw=_catalog::spl.mlog.nullgroup.data.metrics.commands._mergeAuthzCollections.total|CN|O|component|data.$clusterTime.signature.hash.$binary|data.extra_info.note|data.host|data.mem.supported|data.metrics.repl.executor.networkInterface|data.metrics.repl.executor.shuttingDown|data.network.serviceExecutorTaskStats.executor|data.process|data.repl.electionId.$oid|data.repl.hosts|data.repl.ismaster|data.repl.me|data.repl.primary|data.repl.secondary|data.repl.setName|data.repl.tags.all|data.repl.tags.instance|data.security.SSLServerHasCertificateAuthority|data.security.SSLServerSubjectName|data.storageEngine.name|data.storageEngine.persistent|data.storageEngine.readOnly|data.storageEngine.supportsCommittedReads|data.tcmalloc.tcmalloc.formattedString|data.version|datetime|log_level, len=779) warm_rc[0,2] from st_txn_put

01-25-2021 16:50:16.946 +0000 ERROR STMgr - dir='/opt/splunk/var/lib/splunk/_metrics/db/hot_v1_49' unexpected rc=-104 (kw=_catalog::spl.mlog.nullgroup.data.globalLock.currentQueue.total|CN|O|component|data.$clusterTime.signature.hash.$binary|data.extra_info.note|data.host|data.mem.supported|data.metrics.repl.executor.networkInterface|data.metrics.repl.executor.shuttingDown|data.network.serviceExecutorTaskStats.executor|data.process|data.repl.electionId.$oid|data.repl.hosts|data.repl.ismaster|data.repl.me|data.repl.primary|data.repl.secondary|data.repl.setName|data.repl.tags.all|data.repl.tags.instance|data.security.SSLServerHasCertificateAuthority|data.security.SSLServerSubjectName|data.storageEngine.name|data.storageEngine.persistent|data.storageEngine.readOnly|data.storageEngine.supportsCommittedReads|data.tcmalloc.tcmalloc.formattedString|data.version|datetime|log_level, len=763) warm_rc[0,2] from st_txn_put

Basically the same errors repeat over and over again in a similar fashion.