All Posts

When this is a cluster, the master node keeps track of which buckets are on which node. When you shut down a peer, the master updates where the primaries, secondaries, etc. are. After that it orders the other nodes to start fixing the search factor (SF) and replication factor (RF) to fulfill the current requirements. Basically this means that your backup is no longer fully consistent by the time you try to restore it, and the same will happen for all those nodes. I would probably do this by removing one node from the cluster, cleaning it, installing the OS and then Splunk, and just adding it back as a new node. Of course this needs some additional space for the indexes, but your planned approach needs it too. There are a couple of old postings about how to replace old nodes in a cluster where you can see the actual process. r. Ismo
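For reference, a hedged sketch of that replace-one-node-at-a-time flow (host names, ports, and the cluster secret are placeholders; verify the exact flags against your Splunk version):

    # on the peer being replaced, let the cluster reassign primaries before shutdown
    $SPLUNK_HOME/bin/splunk offline --enforce-counts

    # after reinstalling the OS and Splunk, join the clean node back as a new peer
    $SPLUNK_HOME/bin/splunk edit cluster-config -mode peer -manager_uri https://<cluster-manager>:8089 -replication_port 9887 -secret <cluster_key>
    $SPLUNK_HOME/bin/splunk restart

Once the new peer is up, the manager brings the cluster back to RF/SF compliance before you move on to the next node.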
| stats count(actor.displayName) will give you a field called "count(actor.displayName)", not "count", which is why the where command returns no results. Try it like this:

| stats count(actor.displayName) as count | where count < 5
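A hedged sketch of how the tail of the alert search from the question might then read (field names taken from the question; the final threshold direction is whatever the alert actually needs):

    ... | stats values(*) as * by actor.displayName
    | stats count
    | where count > 5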
This is the search with some anonymization.

index=index_1 sourcetype=sourcetype_1 field_1 IN (
    [ search index=index_2 field_2 IN (
        [ search index=index_2 field_2=abcdefg
          | fields field_3
          | mvcombine field_3 delim=" "
          | nomv field_3
          | dedup field_3
          | sort field_3
          | return $field_3 ])
      | fields field_3
      | sort field_3
      | mvcombine field_3 delim=" "
      | nomv field_3 ])

The deepest subsearch returns a list of managers that report to a director, 10 names. The outer subsearch returns a list of users who report to those managers, 1137 names. If I run the search like this, I get output.

index=index_1 sourcetype=sourcetype_1 field_1 IN (1137 entries)

I can't find a reason that the first search returns 'Regex: regular expression is too large', since there is no command that uses regex. I can run each subsearch without any issues. I can't find anything in the _internal index. Any thoughts on why this is happening or a better search? TIA, Joe
Hello, I am asking for your help today. I downloaded the Splunk VMware image to try it out and see how it works, but I have not been able to find in the documentation the
First time ingesting JSON logs, so I need assistance figuring out why my JSON log ingestion is not auto-extracting. Environment: SHC, IDX cluster, typical management servers.

I first tested a manual upload of a log sample by going to a SH, then Settings -> Add Data -> Upload. When I uploaded a log, the sourcetype _json was automatically selected. In the preview panel everything looked good, so I saved the sourcetype as foo and completed the upload into index=test. Looked at the data and everything was good; the "interesting fields" pane on the left had the auto extractions completed. In ../apps/search/local/props.conf, an entry was created...

[foo]
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true
BREAK_ONLY_BEFORE =
INDEXED_EXTRACTIONS = json

I noticed it used INDEXED_EXTRACTIONS, which is not what I wanted (I have never used indexed extractions before), but figured these are just occasional scan logs, literally just a few kilobytes every now and then, so it wasn't a big deal. I copied the above sourcetype stanza to an app in the cluster manager's apps folder (an app where I keep a bunch of random one-off props.conf sourcetype stanzas), then pushed it out to the IDX cluster. Then I created an inputs.conf and server class on the DS to push to the particular forwarder that monitors the folder for the appropriate JSON scan logs.

As expected, eventually the scan logs started being indexed and viewable on the search head. Unfortunately, the auto extractions were not being parsed. The interesting fields panel on the left only had the default fields. In the right panel where the logs are, the field names were highlighted in red, which I guess means Splunk recognizes the field names?? But either way, the issue is I had no interesting fields.

I figured maybe the issue was that I had INDEXED_EXTRACTIONS set on the search heads, and that is probably an indexer setting, so I commented it out and tried using KV_MODE=json in its place, saved the .conf file, and restarted the SH. But the issue remains: no interesting fields.

The test upload worked just fine and I had interesting fields in the test index, however when the logs started coming through from the UF, I no longer had interesting fields despite using the same sourcetype. What am I missing? Is there more to ingesting a JSON file than simply using KV_MODE or INDEXED_EXTRACTIONS? But then why does my test upload work?
here is a sample log:   {"createdAt": "2024-09-04T15:23:12-04:00", "description": "bunch of words.", "detectorId": "text/hardcoded-credentials@v1.0", "detectorName": "Hardcoded credentials", "detectorTags": ["secrets", "security", "owasp-top10", "top25-cwes", "cwe-798", "Text"], "generatorId": "something", "id": "LongIDstring", "remediation": {"recommendation": {"text": "a bunch of text.", "url": "a url"}}, "resource": {"id": "oit-aws-codescan"}, "ruleId": "multilanguage-password", "severity": "Critical", "status": "Open", "title": "CWE-xxx - Hardcoded credentials", "type": "Software and Configuration Checks", "updatedAt": "2024-09-18T10:54:02.916000-04:00", "vulnerability": {"filePath": {"codeSnippet": [{"content": " ftp_site = 'something.com'", "number": 139}, {"content": " ftp_base = '/somesite/'", "number": 140}, {"content": " ftp_filename_ext = '.csv'", "number": 111}, {"content": " ", "number": 111}, {"content": " ftp_username = 'anonymous'", "number": 111}, {"content": " ftp_password = 'a****'", "number": 111}, {"content": "", "number": 111}, {"content": " # -- DOWNLOAD DATA -----", "number": 111}, {"content": " # Put all of the data pulls within a try-except case to protect against crashing", "number": 111}, {"content": "", "number": 148}, {"content": " email_alert_sent = False", "number": 111}], "endLine": 111, "name": "somethingsomething.py", "path": "something.py", "startLine": 111}, "id": "LongIDstring", "referenceUrls": [], "relatedVulnerabilities": ["CWE-xxx"]}}   I appreciate any guidance..
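For reference, a hedged sketch of the two configuration approaches touched on above, using the sourcetype name foo from the post. INDEXED_EXTRACTIONS for a monitored file is applied by the forwarder that reads it, so that stanza has to be deployed to the UF; KV_MODE = json is a search-time setting that lives on the search heads, and the two should not both be active for the same sourcetype or fields come out duplicated.

Search-time extraction (props.conf on the search heads):

    [foo]
    KV_MODE = json
    LINE_BREAKER = ([\r\n]+)
    SHOULD_LINEMERGE = false

Index-time extraction (props.conf deployed to the forwarder that monitors the file, with KV_MODE = none on the search heads):

    [foo]
    INDEXED_EXTRACTIONS = json
    KV_MODE = none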
We had a message-parsing issue in an IBM Integration Bus message flow after enabling the exit of the AppD agent. The AppD header singularityheader injected into the message header caused the error "An invalid XML character (Unicode: 0x73) was found in the prolog of the document" when SOAPRequestNode parses the XML message body. 0x73 is the first letter of the header name "singularityheader". So the flow counted the singularityheader header as the beginning of the message content, which makes the content invalid XML. This happened in message flows that have a SOAP Request node. Has anyone had a similar issue? Any advice will be appreciated.
I removed "(actor.displayName)" from the first "count" command and it works now.
Thank you for the quick reply. It does not seem to run with an individual category either. Should we consider updating the health checks, which takes me to the Splunk Health Assistant Add-on, which is archived?
Hi all, I am trying to install Splunk Security Essentials into a single instance of Splunk with a downloaded file of the app, via the GUI. The documentation does not have any pre-install steps. Any suggestions would be welcome, thanks.

Splunk 9.3.1
Splunk Security Essentials 3.8.0

Error: There was an error processing the upload. Error during app install: failed to extract app from /tmp/tmp6xz06m51 to /opt/splunk/var/run/splunk/bundle_tmp/7364272378fc0528: No such file or directory
I'm trying to create an alert. The alert's query ends with "| stats values(*) as * by actor.displayName | stats count(actor.displayName)". I want to add the clause "| where count > 5" at the end of the query. To verify that the query would work, I changed it to "| where count < 5", but I'm getting no results.
Does it run anything if you select an individual category instead of all categories? If it runs with fewer categories, it could be an issue related to the load on the DMC when running all categories at once. We had a similar kind of bug in previous 8.2 versions, but not on v9 as far as I can see on my side.
Probably because you gave an incomplete description of the events you are working with, the fields that have already been extracted, the SPL you used to get those results, and what your expected result should look like. We can only work with what you provide. We do not have access to your environment, so we are guessing most of the time.
I am trying to run the health check on the DMC. The health check dashboard loads fine from checklist.conf as per the default and local directories. Our Splunk version is 9.3.0. After clicking the start button it gets stuck at 0%. Can I know what could be causing this issue?
Hello! Here is the document which explains using inputs. Please expand the code and look here: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/DashStudio/inputs

"inputs": {
    "input_global_trp": {
        "type": "input.timerange",
        "options": {
            "token": "global_time",
            "defaultValue": "-24h@h,now"
        },
        "title": "Global Time Range"
    }
}

This is the link for linking to a report: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/DashStudio/linkURL#Link_to_a_report

If none of these are helping you out, please try creating your dashboard in Classic and converting it to Studio; you might be able to find the difference. Please upvote if this helps.
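A hedged sketch of how a data source in the same dashboard definition would then consume that token (the search itself is only a placeholder):

    "dataSources": {
        "ds_example": {
            "type": "ds.search",
            "options": {
                "query": "index=_internal | timechart count",
                "queryParameters": {
                    "earliest": "$global_time.earliest$",
                    "latest": "$global_time.latest$"
                }
            }
        }
    }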
Hi @gcusello  Thank you for replying on this post. I am also interested in detecting SQL injections. However, the first two links seem to be outdated and no longer refer to the SQL injection subject.
https://answers.splunk.com/app/questions/1528.html
http://blogs.splunk.com/2013/05/15/sql-injection/
Could you please update them?
Below is what an inputs.conf would look like:

[http://adt]
disabled = false
sourcetype = adt:audit
token = 977xx0B5-E5xx-4xx1-A894-B5DA75XX3A31
indexes = adt_audit
index = adt_audit

For creating a token you can use token generator tools like this: https://www.uuidgenerator.net/ - each token has a unique value, which is a 128-bit number that is represented as a 32-character globally unique identifier (GUID). Agents and clients use a token to authenticate their connections to HEC.

Another way is to create one from the web UI via Settings > Data inputs > HTTP Event Collector; it gets saved in etc/apps/splunk_httpinput/local/inputs.conf, so you can see what it looks like.

Hope this helps; please upvote or mark as solved if this solution is helpful.
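A hedged sketch of testing the token once the input is enabled (host and port are placeholders; 8088 is the default HEC port):

    curl -k https://<splunk-host>:8088/services/collector/event \
      -H "Authorization: Splunk 977xx0B5-E5xx-4xx1-A894-B5DA75XX3A31" \
      -d '{"event": "hello from HEC", "sourcetype": "adt:audit", "index": "adt_audit"}'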
Hi @sverdhan , Go in [Settings > Licensing > License Usage > Previous 60 days > Split by Sourcetype] and you'll have your search, which will be:

index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by st fixedrange=false
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

Ciao. Giuseppe
Hi @PickleRick  I accepted your previous suggestion of using eventstats as the solution. Thank you for your help.

Thanks for bringing to my attention that eventstats also has a memory limitation, another limits.conf setting that only an admin can modify. The default setting is max_mem_usage_mb=200MB. How do I know if my dataset will ever hit 200 or not? Does streamstats have a limitation? How do I use streamstats in my case?

When I changed eventstats to streamstats, it shows incremental data instead of the total, so I need to figure out how to filter that, if it's possible. Thanks

Streamstats result:

dc    ip         location      name
1     1.1.1.1    location-1    name0
2     1.1.1.1    location-1    name1
1     1.1.1.2    location-2    name2
1     1.1.1.3    location-3    name0
2     1.1.1.3    location-3    name3
1     1.1.1.4    location-4    name4
2     1.1.1.4    location-4    name4b
1     1.1.1.5    location-0    name0
1     1.1.1.6    location-0    name0
0     1.1.1.7    location-7
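A hedged sketch of collapsing that running count down to one summary row per ip, assuming a single row per ip is acceptable (field names taken from the table above; ... stands for the original search):

    ... | streamstats dc(name) as dc by ip
    | sort 0 ip, -dc
    | dedup ip

sort 0 ip, -dc puts the highest running count first within each ip, and dedup ip keeps only that row, so dc ends up holding the per-ip total.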
Hi @sbhatnagar88 , no, it's correct: on Linux you can tar and untar the full Splunk home directory. Just remember to mount all the partitions at the same mount points as the original; if you cannot, you have to modify the $SPLUNK_DB parameter in splunk-launch.conf. Ciao. Giuseppe
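A hedged sketch of the tar/untar itself, assuming a default /opt/splunk install (stop Splunk first so the copy is consistent):

    # on the source host
    /opt/splunk/bin/splunk stop
    tar -czf /tmp/splunk_home.tar.gz -C /opt splunk

    # on the destination host, with the same mount points as noted above
    tar -xzf /tmp/splunk_home.tar.gz -C /opt
    /opt/splunk/bin/splunk start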
Getting this result, I don't see the tag name in the list:

_time                        NULL
2024-10-02 15:45:39.507      2
2024-10-02 15:45:39.508      1
2024-10-02 15:45:39.516      1
2024-10-02 15:46:14.196      6
2024-10-02 15:46:14.199      3