All Posts

This solved my issue
I believe there is something wrong with your DMC setup. Please enable Distributed Mode only on the monitoring console, not on the search heads and indexers. You can also try creating a new health check item with a query and see if that works: /en-US/app/splunk_monitoring_console/monitoringconsole_check_list. I would also encourage you to open a support case or an OnDemand request.
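If it helps, here is a minimal sketch of the kind of standalone query you could try as a custom health check item, just to confirm the console can reach its instances. It only uses the standard server/info REST endpoint, so treat it as an illustration rather than a ready-made check:

| rest /services/server/info
| stats count by splunk_server, version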
1. We still have no idea what your raw data looks like. For example, how are we supposed to know whether the log.message path is the right one? I assume it is, because you're getting _some_ results, but we have no way to verify that.
2. Your initial search is very inefficient. Wildcards in the middle of search terms can give strange and inconsistent results, and in general a wildcard anywhere other than the end of a search term slows your search down.
3. You're getting some results, but you're not showing us anything. How are we supposed to understand what you're getting?
4. Don't get hung up on this "177 events" number. It's simply the count of all events matched by your initial search.
5. There are two main techniques for debugging searches: either you start from the beginning and add commands one by one until the results stop making sense, or you start with the whole search and remove commands one by one until the results start making sense (see the sketch after this list).
6. Honestly, I have no idea what you're trying to achieve with this mvzip/mvexpand/regex magic.
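As an aside, here is a minimal self-contained illustration of the usual mvzip/mvexpand pattern, using made-up fields, in case it clarifies what that part of such a search is normally for:

| makeresults
| eval names=split("alice,bob,carol", ","), scores=split("10,20,30", ",")
| eval pair=mvzip(names, scores)
| mvexpand pair
| rex field=pair "^(?<name>[^,]+),(?<score>.+)$"
| table name score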
Hello, what should I do when my kvstore folder just vanishes? It is not recreated after a restart. I tried to create a new folder with splunk privileges, but it doesn't help. Do you have any idea how to repair this?
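If anyone else hits something similar, a few commands that can help narrow it down; the paths assume a default /opt/splunk install, and note that the last command deletes the local KV store data, so this is a cautious sketch rather than a fix:

# Check whether the KV store process is running and what state it reports
/opt/splunk/bin/splunk show kvstore-status

# Look for startup errors from the embedded mongod
tail -n 100 /opt/splunk/var/log/splunk/mongod.log

# Last resort only -- this wipes the local KV store data on this instance
/opt/splunk/bin/splunk clean kvstore --local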
I have tried all the recommendations in this thread and none of them work. I upgraded from 9.0 to 9.3, but the clients are not phoning home.
I have the following setting in splunk_monitoring_console_assets.conf at /etc/apps/splunk_monitoring_console/local/:

[settings]
disabled = 0

I have the same setting on the DMC, SH, and indexers. Everything works except the DMC. I have the following roles for the DMC. Should any other roles be enabled?
First, let's find the transforms.conf by running the btool command below.

/opt/splunk/bin/splunk btool transforms list --debug | grep sourcetype_1

Then you can try something like this against the transforms.conf from the app found above:

splunk@idx1:/opt/splunk/bin$ /opt/splunk/bin/splunk btool validate-regex /opt/splunk/etc/apps/learned/local/transforms.conf --debug
Bad regex value: '-zA-Z0-9_\.]+)=\"?([a-zA-Z0-9_\.:-]+)', of param: transforms.conf / [metrics_field_extraction] / REGEX; why: unmatched closing parenthesis
@sainag_splunk The command doesn't return anything. Is there supposed to be an index or sourcetype in the command?
Hello, I'm trying to build a result set which can be used in an alert later on. Basically, when the search is executed, it should look at a field named "state" and compare its value with the value from two hours ago for the same corresponding record, identified by the field "pv_number". If the value did not change between "now" and "two hours ago", capture it in a table showing the previous state and current state along with the previous time and current time. Any help is greatly appreciated. Thanks much!
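A rough sketch of one way to approach this, assuming the events carry the state and pv_number fields described above; your_index is a placeholder, and the split point between the "previous" and "current" windows is illustrative and would need tuning to how often the records arrive:

index=your_index earliest=-2h@m latest=now
| eval window=if(_time < relative_time(now(), "-1h"), "previous", "current")
| stats latest(eval(if(window="previous", state, null()))) as previous_state
        latest(eval(if(window="previous", _time, null()))) as previous_time
        latest(eval(if(window="current", state, null()))) as current_state
        latest(eval(if(window="current", _time, null()))) as current_time
        by pv_number
| where previous_state == current_state
| convert ctime(previous_time) ctime(current_time)
| table pv_number previous_time previous_state current_time current_state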
Hello! There could be a regex defined on that sourcetype. Please run btool on the backend for that sourcetype and check whether there are any spaces or typos in that regex, then try to remove them.

/opt/splunk/bin/splunk btool validate-regex --debug

I would also look at the search.log to see what's happening there. Hope this helps.
I don't see an issue in my lab with the same version. The only thing I can think of is to make sure there are no "unconfigured instances" on your monitoring console. Make sure you set this and apply changes as per this doc: https://docs.splunk.com/Documentation/Splunk/9.3.1/DMC/Configureindistributedmode#Reset_server_roles_after_restart Hope this helps and resolves the issue.
Thank you for sharing your detailed process and the issue you're encountering with JSON log ingestion. Your testing approach was thorough, but there are a few key points to address.

Props.conf location: the primary parsing settings should be on the indexers, not the search heads. For JSON data, you typically only need minimal settings on the search head.

Search head settings: on the search head, you can simplify your props.conf to just:

[yoursourcetype]
KV_MODE = json

This tells Splunk to parse the JSON at search time, which should give you the field extractions you're looking for. To onboard this properly, you can also set the "magic 6" props on your indexers. https://community.splunk.com/t5/Getting-Data-In/props-conf/m-p/426134

Try running the search below to figure out which app is taking precedence.

| rest splunk_server=local /services/configs/conf-props/YOURSOURCETYPE | transpose | search column=eai:acl.app

Please upvote / mark as solved if this helps.
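For completeness, here is a sketch of what those index-time props could look like on the indexers (or a heavy forwarder, if one sits in the path). The sourcetype name is the same placeholder as above and the timestamp settings are examples that would need to match the actual JSON:

# props.conf on the indexers -- illustrative values only
[yoursourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
# adjust TIME_PREFIX / TIME_FORMAT to whatever timestamp field the JSON actually carries
TIME_PREFIX = "createdAt":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 40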
Hi @Henry.Tellez, Did you see the reply above from @Tiong.Koh? If that reply helped, can you please click the "Accept as Solution" button? This helps let the community know this question was answered. If it didn't help, reply back and continue the conversation. 
I'm looking for application availability and not server uptime.
When this is a cluster, the manager node keeps track of which buckets are on which peer. When you shut a peer down, the manager updates where the primaries, secondaries, etc. are, and after that it orders the other nodes to start fix-up activity so the search factor (SF) and replication factor (RF) again meet the requirements. Basically this means that your backup is no longer fully consistent by the time you try to restore it, and the same will happen for all of those nodes. I would probably do this by removing one node from the cluster, cleaning it, installing the OS and then Splunk, and adding it back as a new node. Of course this needs some additional space for indexes, but your planned approach needs that too. There are a couple of old posts about how to replace old nodes in a cluster where you can see the actual process. r. Ismo
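As a rough outline of that replace-one-node-at-a-time approach (exact CLI flags can differ between Splunk versions, so verify against the docs for your release; the hostname and secret below are placeholders):

# On the peer being retired: take it offline and let the cluster fix up SF/RF first
/opt/splunk/bin/splunk offline --enforce-counts

# On the manager: wait until the cluster reports a valid and complete state again
/opt/splunk/bin/splunk show cluster-status

# On the rebuilt node: install Splunk, then join it to the cluster as a new peer
/opt/splunk/bin/splunk edit cluster-config -mode peer -manager_uri https://manager.example.com:8089 -replication_port 9887 -secret <cluster_secret>
/opt/splunk/bin/splunk restart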
| stats count(actor.displayName) will give you a field called "count(actor.displayName)", not "count", which is why the where command returns no results. Try it like this:

| stats count(actor.displayName) as count
| where count < 5
This is the search with some anonymization.

index=index_1 sourcetype=sourcetype_1 field_1 IN (
    [ search index=index_2 field_2 IN (
        [ search index=index_2 field_2=abcdefg
          | fields field_3
          | mvcombine field_3 delim=" "
          | nomv field_3
          | dedup field_3
          | sort field_3
          | return $field_3 ])
      | fields field_3
      | sort field_3
      | mvcombine field_3 delim=" "
      | nomv field_3 ])

The deepest subsearch returns a list of managers that report to a director, 10 names. The outer subsearch returns a list of users who report to those managers, 1137 names. If I run the search like this, I get output.

index=index_1 sourcetype=sourcetype_1 field_1 IN (1137 entries)

I can't find a reason the first search returns 'Regex: regular expression is too large', since there is no command that uses regex. I can run each subsearch on its own without any issues. I can't find anything in the _internal index. Any thoughts on why this is happening, or a better search? TIA, Joe
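Not a definitive answer, but one possible restructuring that avoids building the single huge space-joined token inside IN(...), which may be what ends up expanding into an oversized regex: let each subsearch return plain field=value OR terms instead. Field and index names are the same anonymized placeholders as above:

index=index_1 sourcetype=sourcetype_1
    [ search index=index_2
        [ search index=index_2 field_2=abcdefg
          | dedup field_3
          | fields field_3
          | rename field_3 as field_2
          | format ]
      | dedup field_3
      | fields field_3
      | rename field_3 as field_1
      | format ]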
Hello, today I'm asking for your help. I downloaded the Splunk VMware image to try it out and see how it works, but I have not been able to find in the documentation the
First time ingesting JSON logs, so I need assistance figuring out why my JSON log ingestion is not auto-extracting. Environment: SHC, IDX cluster, typical management servers.

I first tested a manual upload of a log sample by going to a SH, then Settings -> Add Data -> Upload. When I uploaded a log, the sourcetype _json was automatically selected. In the preview panel everything looked good, so I saved the sourcetype as foo and completed the upload into index=test. Looked at the data, everything was good; the "interesting fields" pane on the left had the auto extractions completed. In ../apps/search/local/props.conf, an entry was created:

[foo]
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true
BREAK_ONLY_BEFORE =
INDEXED_EXTRACTIONS = json

I noticed it used INDEXED_EXTRACTIONS, which is not what I wanted (I have never used indexed extractions before), but these are just occasional scan logs, literally a few kilobytes every now and then, so it wasn't a big deal. I copied the above sourcetype stanza to an app in the cluster manager's manager-apps folder (an app where I keep a bunch of random one-off props.conf sourcetype stanzas), then pushed it out to the IDX cluster. Then I created an inputs.conf and server class on the DS to push to the particular forwarder that monitors the folder for the appropriate JSON scan logs.

As expected, eventually the scan logs started being indexed and were viewable on the search head. Unfortunately, the auto extractions were not being parsed. The interesting fields panel on the left only had the default fields. In the right panel where the logs are, the field names were highlighted in red, which I guess means Splunk recognizes the field names? But either way, the issue is I had no interesting fields.

I figured maybe the issue was that on the search heads I had INDEXED_EXTRACTIONS set, and that's probably an indexer setting, so I commented it out and tried using KV_MODE = json in its place, saved the .conf file, and restarted the SH. But the issue remains: no interesting fields.

The test upload worked just fine and I had interesting fields in the test index; however, when the logs started coming through from the UF, I no longer had interesting fields despite using the same sourcetype. What am I missing? Is there more to ingesting a JSON file than simply using KV_MODE or INDEXED_EXTRACTIONS? But then why does my test upload work? (See the note after the sample log below.)
here is a sample log:   {"createdAt": "2024-09-04T15:23:12-04:00", "description": "bunch of words.", "detectorId": "text/hardcoded-credentials@v1.0", "detectorName": "Hardcoded credentials", "detectorTags": ["secrets", "security", "owasp-top10", "top25-cwes", "cwe-798", "Text"], "generatorId": "something", "id": "LongIDstring", "remediation": {"recommendation": {"text": "a bunch of text.", "url": "a url"}}, "resource": {"id": "oit-aws-codescan"}, "ruleId": "multilanguage-password", "severity": "Critical", "status": "Open", "title": "CWE-xxx - Hardcoded credentials", "type": "Software and Configuration Checks", "updatedAt": "2024-09-18T10:54:02.916000-04:00", "vulnerability": {"filePath": {"codeSnippet": [{"content": " ftp_site = 'something.com'", "number": 139}, {"content": " ftp_base = '/somesite/'", "number": 140}, {"content": " ftp_filename_ext = '.csv'", "number": 111}, {"content": " ", "number": 111}, {"content": " ftp_username = 'anonymous'", "number": 111}, {"content": " ftp_password = 'a****'", "number": 111}, {"content": "", "number": 111}, {"content": " # -- DOWNLOAD DATA -----", "number": 111}, {"content": " # Put all of the data pulls within a try-except case to protect against crashing", "number": 111}, {"content": "", "number": 148}, {"content": " email_alert_sent = False", "number": 111}], "endLine": 111, "name": "somethingsomething.py", "path": "something.py", "startLine": 111}, "id": "LongIDstring", "referenceUrls": [], "relatedVulnerabilities": ["CWE-xxx"]}}   I appreciate any guidance..
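A quick way to check whether the JSON itself is the problem: spath parses the raw event at search time regardless of the sourcetype configuration, so if the fields below show up, the events are intact and the gap is in where the props are applied (for monitored files, INDEXED_EXTRACTIONS is evaluated on the universal forwarder, not on the indexers, while KV_MODE = json belongs on the search heads). This sketch uses the index and sourcetype from the test upload; swap in whatever index the forwarder-fed data lands in:

index=test sourcetype=foo
| spath
| table createdAt severity status title vulnerability.filePath.path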