All Posts


Hi, I have a list of events that contain a customer ID. I'm trying to detect when I have a sequence of events with incremental changes to the ID. Example:
- event A - ID0
- event B - ID1
- event C - ID2
- event D - ID3

I might have other events between these increments that carry unrelated IDs (e.g. event A ID0, event H ID22, event B ID1). I've tried

| streamstats current=f last(CustomerID) as prev_CustomerID
| eval increment = CustomerID - prev_CustomerID

but without any luck. Do you know a way this could be achieved?
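Not an SPL answer as such, but the intended logic can be sketched in Python (event names and IDs are illustrative, and events are assumed to arrive in time order). The trick is to key each open run by the ID it expects next, so unrelated IDs in between don't break the run:

```python
def find_increment_runs(events, min_len=3):
    """Scan time-ordered (event_name, customer_id) pairs and collect runs
    where each ID is exactly the previous matched ID plus one, even when
    events with unrelated IDs are interleaved."""
    runs = {}  # expected next ID -> run collected so far
    for name, cid in events:
        run = runs.pop(cid, [])  # does some run expect this ID next?
        run.append((name, cid))
        runs[cid + 1] = run      # the run now expects cid + 1
    return [r for r in runs.values() if len(r) >= min_len]
```

This also hints at why the `streamstats last()` attempt fails: `last(CustomerID)` picks up the interleaved unrelated IDs, whereas the run-based approach only compares against the ID each run is actually waiting for.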
Sure. Attached are the valueCount and Pct, along with the number of events: 1,380,350.
Hi, I cannot see any issues in the Release Notes. Have you already opened a support ticket with Splunk? If not, could you do it, so we can get this logged as a known issue? r. Ismo
Hello everyone, I'm using Splunk DB Connect to get data from a DB. I get three values from the DB, as follows:
- the event's ID
- JSON data
- the creation date of the event

Here is the result. How can I remove the "rawjson=" prefix to be able to get this data in JSON format? Regards,
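The exact SPL depends on how the field arrives, but the core transformation is just stripping the prefix before parsing. A minimal Python sketch, assuming each value is literally prefixed with `rawjson=` (the function name is illustrative):

```python
import json

def parse_raw(value):
    """Strip an optional 'rawjson=' prefix and parse the remainder as JSON."""
    prefix = "rawjson="
    if value.startswith(prefix):
        value = value[len(prefix):]
    return json.loads(value)
```

In Splunk itself, something along the lines of `| rex mode=sed field=_raw "s/^rawjson=//"` followed by `spath` would perform the equivalent step, though the field name to target depends on your DB Connect input configuration.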
@deodeshm the initial way to test connectivity with any app is to press the "Test Connectivity" button. Whilst this will not send an email, it will test that the path/capability is available for when it can send a full email. Additionally, you could run the `send_email` action manually to test that it sends a proper email.

-- Hope this helped. If so, please mark as a solution for future enquirers. Happy SOARing! --
I met the same error during the DB Connect setup. The solution that worked in my situation was to check $SPLUNK_HOME/etc/apps/splunk_app_db_connect/metadata/local.meta. There are definitions for identities and db_connections. Make sure they have "export = system" instead of "export = none". It should look like this:

[identities/xxxxxx]
access = read : [ * ], write : [ admin, db_connect_admin ]
export = system
owner = xxxxxx
......
[db_connections/xxxxxx]
access = read : [ * ], write : [ admin, db_connect_admin ]
export = system
owner = xxxxxx
......
Hi, if I recall right, this needs only a git client. Basically this means that you could use it with (almost?) any git server which supports basic git commands.

## What is required for this application to work with a remote git repository?

The following assumptions are made:
- git is accessible on the command line; this has been tested on Linux & Windows with Git for Windows installed
- git is using an SSH-based URL, and the remote git repository allows the machine running the SplunkVersionControl application to remotely access the repository without a username/password prompt (i.e. SSH keys are in use)
- git will work from the user running the Splunk process over SSH; note that on Windows this will be the system account by default, on Linux the splunk user
- the git repository is dedicated to this particular backup, as the root / top level of the git repo will be used to create backups

r. Ismo
When data for one of the months is not available for one of the names, I see an empty space between the bars. Is there any way we can avoid that space?
Hi, usually you must restart Splunk for those passwords to be hashed. r. Ismo
Hi Community, the sslPassword in the Search Head's $SPLUNK_HOME/etc/system/local/web.conf is not being hashed. Passwords in other .conf files in $SPLUNK_HOME/etc/system/local/, like server.conf & authentication.conf, are hashed. I changed the password in web.conf recently. Does anyone have any idea?
You need to be precise in the data description. I assume that the six characters starting with 999 are bounded by an underscore (_), the beginning of the string, or the end of the string. Something like the following would do:

| rex field=field "^([^_]+_)*(?<six_char>999.{3})(_[^_]+)*$"

Here is an emulation you can play with and compare with real data:

| makeresults
| fields - _time
| eval field=mvappend("blah_999ars_blah_blah", "blah_blah_999cha_blah", "9996ch_blah_blah_blah", "blah_blah_blah_999har")
| mvexpand field
``` data emulation above ```
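As a cross-check outside Splunk, the same anchored pattern behaves as expected in Python (the named group is replaced with a plain capture group, and the test strings are taken from the emulation in the post):

```python
import re

# Same anchored pattern as the rex: optional underscore-delimited tokens
# on either side of a six-character token that starts with 999.
PATTERN = re.compile(r"^(?:[^_]+_)*(999.{3})(?:_[^_]+)*$")

def extract_six(field):
    """Return the 999-prefixed six-character token, or None if absent."""
    m = PATTERN.match(field)
    return m.group(1) if m else None
```

Like the original rex, `.{3}` is deliberately permissive about what the three trailing characters are; tighten it to `[^_]{3}` if they can never contain an underscore.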
Hi Pradeep, thank you for providing the curl command. I noticed that you've combined two different authorization methods in the same command: "Password-Based" and "Bearer Token." To proceed, please choose either "Username/Password" or "Bearer Token" for this command. Here are the revised commands; please try them and inform me of the results.

curl --user <username>@account-name:<password> "https://<controller page>/controller/rest/applications"

curl -H "Authorization: Bearer <ACCESS TOKEN>" "https://<controller page>/controller/rest/applications"
Hi, as you have had some I/O errors on your /opt/cold, there is a possibility that some buckets are corrupted and cannot be used anymore. You should find from the _internal log what caused that issue. Just search there for the buckets which you see in the MC's "SF & RF not met" view and in the fixup tasks. After you have identified the reasons, you can decide how to proceed: maybe just remove the primary bucket and use your replicas, or something else, but this is totally dependent on the reason you find in _internal. What are your SF & RF, and do you have a single-site or multisite cluster? Basically it shouldn't need a data rebalance unless your bucket count has become totally unbalanced between indexers. You can see that e.g. via REST calls. r. Ismo
Hello, were you able to solve this? If yes, please let us know.
Hi @bmanikya, does my regex work? Ciao. Giuseppe
Hi @sandeepreddy947, with an infrastructure like yours (45 indexers), the only thing to do is open a ticket with Splunk Support. Ciao. Giuseppe
Hi, you could try this:

... | rex field=field "(?<foo>999[a-zA-Z0-9]{3})_*"

Then you have the match in field foo. You should change [a-zA-Z0-9] if those 3 characters could be something other than alphanumerics. r. Ismo
You should not use foreach *. tag::event is a meta field and foreach will not handle those. It is quite obvious that your data also contains other, irrelevant fields. If you know those flag names, enumerate them. (Read the documentation.)

| foreach flag1 flag2 flag3 ... flagX
    [eval trueflag = mvappend(trueflag, if(<<FIELD>> == "true", "<<FIELD>>", null()))]
| stats count by trueflag

Alternatively, you probably do not care about the other fields. Remove them so foreach will not be bombed:

| fields loggingObject.responseJson
| spath input=loggingObject.responseJson
| foreach *
    [eval trueflag = mvappend(trueflag, if(<<FIELD>> == "true", "<<FIELD>>", null()))]
| stats count by trueflag
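The foreach/mvappend idea reduces to "collect the names of fields whose value is the string 'true'". A Python sketch of that logic, with field names like flag1..flag3 purely illustrative:

```python
def true_flags(event, flag_names=None):
    """Collect names of fields whose string value is 'true'.
    If flag_names is given, only those fields are considered (the
    equivalent of enumerating the fields in foreach); otherwise all
    fields are scanned (the equivalent of foreach *)."""
    items = event.items() if flag_names is None else (
        (k, event.get(k)) for k in flag_names)
    return [k for k, v in items if v == "true"]
```

This also shows why enumerating is safer: scanning everything only works once irrelevant fields (like the meta field tag::event) have been removed or are guaranteed never to hold the value "true".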
Hello Experts, I'm trying to work out how to strip down a field: field="blah_6chars_blah_blah". The 6chars part is what I want to extract, and those 6 chars are always prefixed with 999. The 999-prefixed 6 chars might be in a different place in the field, e.g. blah_blah_6chars_blah. Example 6chars value: 999aaa. So the regex should find all occurrences of 999 in the field, extract the 999 and the next 3 chars, and create an additional field with the result. Thanks
Hi, your changes apply only to new events, not to those which are already indexed. r. Ismo