Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @smichalski, I’m a Community Moderator in the Splunk Community. This question was posted 4 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Thanks!
hi, try
| makeresults
| eval date="Nov 16 10:00:57 2024"
| eval epoch_time=strptime(date, "%b %d %H:%M:%S %Y")
| fields epoch_time
regards, Abraham
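As a quick sanity check (a self-contained makeresults test, reusing the same format string as above), you can format the epoch value back into a readable string with strftime:
| makeresults
| eval date="Nov 16 10:00:57 2024"
| eval epoch_time=strptime(date, "%b %d %H:%M:%S %Y")
| eval check=strftime(epoch_time, "%b %d %H:%M:%S %Y")
| fields date epoch_time check
If check matches date, the conversion round-trips correctly.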
Recently I encountered an issue while rebuilding data on one of our indexers. During this process I needed to execute the following command: /opt/splunk/bin/splunk _internal call /data/indexes/main/rebuild-metadata-and-manifests. However, upon running it I was prompted for a Splunk username and password. Typically we use the credentials created in the web GUI, but since the web GUI is usually disabled on indexers, there is no GUI username and password available on them. I tried my search head username and password, followed by the OS username and password, but neither worked. After some research, I discovered that every Splunk instance includes a default admin user created during installation (username: admin, password: changeme), but that didn't work for me either. Here is the procedure that finally worked to reset the password for the admin user:
1. Access the indexer's CLI; the passwd file lives in /opt/splunk/etc/.
2. Rename that file to passwd.bak.
3. Create a new file named user-seed.conf in /opt/splunk/etc/system/local/ with the following configuration:
[user_info]
USERNAME = admin
PASSWORD = <password of your choice>
4. Restart the Splunk service on that indexer using /opt/splunk/bin/splunk restart. This generates a new passwd file.
You can now use the admin user with the password you set in step 3. After resetting the password, I reran the initial command with the updated admin credentials and it worked.
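For reference, the same steps condensed into shell commands; this is only a sketch that assumes a /opt/splunk install as in the post, run as the user that owns the Splunk files:
# back up the existing passwd file
mv /opt/splunk/etc/passwd /opt/splunk/etc/passwd.bak
# seed a new admin credential (the password is a placeholder; choose your own)
cat > /opt/splunk/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = <password of your choice>
EOF
# restart to regenerate the passwd file
/opt/splunk/bin/splunk restart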
Team, I want to convert the below time into epoch time. Please help.
Time: Nov 16 10:00:57 2024
Depending on your deployment, it might be worth considering switching to the Microsoft JDBC driver, which is the one suggested in Splunk's documentation. However, jTDS might still work. By default, jTDS does not use SSL for the connection, which is what causes this error. Append the following to the JDBC URL on the connection configuration page:
;ssl=require;
Feel free to share your connection string, redacted as appropriate.
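For illustration, a jTDS URL with the SSL property appended might look like this (the host, port, and database name are placeholders, not from your setup):
jdbc:jtds:sqlserver://dbhost.example.com:1433/mydb;ssl=require;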
It just suited my work sequence...
EXAMPLE DATA:
EXAMPLE DATA:   { "sourcetype": "testoracle_sourcetype", "data": { "cdb_tbs_check": [ { "check_error": "", "check_name": "cdb_tbs_check", "check_status": "OK", "current_use_mb": "1355", "percent_used": "2", "tablespace_name": "SYSTEM", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "cdb_tbs_check", "check_status": "OK", "current_use_mb": "23596", "percent_used": "36", "tablespace_name": "SYSAUX", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "cdb_tbs_check", "check_status": "OK", "current_use_mb": "29", "percent_used": "0", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "cdb_tbs_check", "check_status": "OK", "current_use_mb": "4", "percent_used": "0", "tablespace_name": "USERS", "total_physical_all_mb": "65536" } ], "fra_check": [ { "check_error": "", "check_name": "fra_check", "check_status": "OK", "flash_in_gb": "40", "flash_reclaimable_gb": "0", "flash_used_in_gb": "1.5", "percent_of_space_used": "3.74" } ], "global_parameters": { "check_error": "", "check_name": "General_parameters", "check_status": "OK", "database_major_version": "19", "database_minor_version": "0", "database_name": "C2N48617", "database_version": "19.0.0.0.0", "host_name": "flosclnrhv03.pharma.aventis.com", "instance_name": "C2N48617", "script_version": "1.0" }, "pdb_tbs_check": [ { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "76", "pdb_name": "O1S48633", "percent_used": "0", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "5", "pdb_name": "O1S48633", "percent_used": "0", "tablespace_name": "TOOLS", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "21", "pdb_name": "O1NN2467", "percent_used": "0", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "627", "pdb_name": "O1NN2467", "percent_used": "1", "tablespace_name": "SYSAUX", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "784", "pdb_name": "O1S48633", "percent_used": "1", "tablespace_name": "SYSTEM", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "1547", "pdb_name": "O1NN8944", "percent_used": "2", "tablespace_name": "SYSAUX", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "1149", "pdb_name": "O1S48633", "percent_used": "2", "tablespace_name": "USERS", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "58", "pdb_name": "O1NN8944", "percent_used": "0", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "7804", "pdb_name": "O1S48633", "percent_used": "12", "tablespace_name": "SYSAUX", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "1176", "pdb_name": "O1NN8944", "percent_used": "4", "tablespace_name": "USERS", "total_physical_all_mb": "32767" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", 
"current_use_mb": "378", "pdb_name": "O1NN8944", "percent_used": "1", "tablespace_name": "INDX", "total_physical_all_mb": "32767" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "705", "pdb_name": "O1NN8944", "percent_used": "1", "tablespace_name": "SYSTEM", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "623", "pdb_name": "O1NN2467", "percent_used": "1", "tablespace_name": "SYSTEM", "total_physical_all_mb": "65536" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "3", "pdb_name": "O1S48633", "percent_used": "0", "tablespace_name": "AUDIT_TBS", "total_physical_all_mb": "8192" }, { "check_error": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "128", "pdb_name": "O1S48633", "percent_used": "0", "tablespace_name": "USRINDEX", "total_physical_all_mb": "65536" } ], "processes": { "check_error": "", "check_name": "processes", "check_status": "OK", "process_current_value": "294", "process_limit": "1000", "process_percent": "29.4" }, "queue_mem_check": [ { "check_error": "", "check_name": "queue_mem_check", "check_status": "OK", "queue_name": "AQ$_Q_PIWORKTASK_TAB_E", "queue_owner": "LIVE2459_VAL", "queue_sharable_mem": "4072" }, { "check_error": "", "check_name": "queue_mem_check", "check_status": "OK", "queue_name": "AQ$_Q_TASKREPORTWORKTASK_TAB_E", "queue_owner": "LIVE2459_VAL", "queue_sharable_mem": "4072" }, { "check_error": "", "check_name": "queue_mem_check", "check_status": "OK", "queue_name": "AQ$_Q_LABELWORKTASK_TAB_E", "queue_owner": "LIVE2459_VAL", "queue_sharable_mem": "4072" }, { "check_error": "", "check_name": "queue_mem_check", "check_status": "OK", "queue_name": "AQ$_Q_PIPROCESS_TAB_E", "queue_owner": "LIVE2459_VAL", "queue_sharable_mem": "4072" }, { "check_error": "", "check_name": "queue_mem_check", "check_status": "OK", "queue_name": "AQ$_ALERT_QT_E", "queue_owner": "SYS", "queue_sharable_mem": "4032" }, { "check_error": "", "check_name": "queue_mem_check", "check_status": "OK", "queue_name": "ALERT_QUE", "queue_owner": "SYS", "queue_sharable_mem": "0" } ], "script_version": "1.0", "sessions": { "check_error": "", "check_name": "sessions", "check_status": "OK", "sessions_current_value": "293", "sessions_limit": "1536", "sessions_percent": "19.08" } } }
I am trying to upload a JSON file using the UI in Splunk Cloud, applying the settings below for parsing, but the data comes in as a single event:
[custom_json_sourcetype]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
KV_MODE = json
LINE_BREAKER = },\s*{
Please advise the correct settings to apply under sourcetypes in the web UI when uploading. Here is the data:
{
    "sourcetype": "testoracle_sourcetype",
    "data": {
        "cdb_tbs_check": [
            {
                "check_error": "",
                "check_name": "cdb_tbs_check",
                "check_status": "OK",
                "current_use_mb": "1355",
                "percent_used": "2",
                "tablespace_name": "SYSTEM",
                "total_physical_all_mb": "65536"
            },
            {
                "check_error": "",
                "check_name": "cdb_tbs_check",
                "check_status": "OK",
                "current_use_mb": "23596",
                "percent_used": "36",
                "tablespace_name": "SYSAUX",
                "total_physical_all_mb": "65536"
            },
            {
                "check_error": "",
                "check_name": "cdb_tbs_check",
                "check_status": "OK",
                "current_use_mb": "29",
                "percent_used": "0",
                "tablespace_name": "UNDOTBS1",
                "total_physical_all_mb": "65536"
            },
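Not an authoritative fix, but two documented points to compare against: Splunk's docs advise not combining KV_MODE = json with INDEXED_EXTRACTIONS = json (fields get extracted twice; use KV_MODE = none), and a LINE_BREAKER regex is only honored if it contains a capturing group (and, as far as I know, is bypassed entirely when INDEXED_EXTRACTIONS parses the file). Also note that as posted the file is one well-formed JSON object, so a single event is the expected result; splitting on the inner array elements would leave broken JSON fragments per event, which is why restructuring the file to one object per line is usually the cleaner fix. A minimal stanza under those assumptions:
[custom_json_sourcetype]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
KV_MODE = none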
Stumbled across this old query as I need exactly the same functionality, inverse_transform() after SS pre-processing, since my data vary in scale/level. Is there any plan to add this any time soonish? Thanks, MCW
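In case it helps while waiting: assuming SS here refers to StandardScaler, its transform is (x - mean) / stdev, so if you keep the statistics yourself you can invert the scaling in plain SPL without MLTK support. A sketch with a hypothetical field x:
| eventstats avg(x) as x_mean, stdev(x) as x_sd
| eval x_scaled = (x - x_mean) / x_sd
and after whatever modelling you run on x_scaled:
| eval x_restored = x_scaled * x_sd + x_mean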
My apologies for bringing this old topic up again, but it's the only one about this error message and I stumbled across it while investigating the same issue (different app version, though not the latest, so it might already be fixed). In summary, I traced the problem back to local.meta, which listed as object owner a user whose Splunk account had been removed. The solution was either to re-assign the object to a valid user or to remove the owner entry (which assigns it to nobody).
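For anyone landing here later, the offending entry in local.meta looks roughly like this (the stanza name and user below are made up for illustration):
# $SPLUNK_HOME/etc/apps/<app>/metadata/local.meta
[savedsearches/My%20Saved%20Search]
owner = departed_user
Changing owner to a valid user, or deleting the owner line entirely (which assigns the object to nobody), resolves the error.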
Okay, everything should be working then... You can check which search peers returned the event data using the following search:
index=* | stats values(splunk_server) by index
As long as your search factor is met, the values of splunk_server should be the local peer names, depending on which SH you run it from. You can also check the search logs in the Job Inspector. What is your overall goal here? As I said, search affinity is not a security control; it is only designed to make searches more efficient. All data in site1 is replicated to site2 and vice versa anyway, according to your config.
@payl_chdhry this one works, thanks! Any idea what REST URL to use to get all the GUIDs? I see the majority of the GUIDs but am still missing a couple of their hostnames.
| rest /services/server/info
| rest splunk_server_group=* /services/cluster/master/peers
| rest splunk_server=* /services/search/distributed/peers
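If it helps, a sketch for mapping GUIDs to host names in one pass (assuming your role is allowed to query all search peers): /services/server/info exposes both serverName and guid, so something like
| rest splunk_server=* /services/server/info
| table splunk_server serverName guid
should show which peers are missing.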
Thank you! I've been trying to find logs from process startup but I'm not sure where these might be located. What do you mean by 'the other side of the connection'?
Hi, please check the above two screenshots. I want to join these queries in such a way that I get AppID along with the columns in the first search query. The requirement is that AppID should come against the order ID from the first screenshot. Please suggest.
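Since the actual queries are only in the screenshots, here is just a generic sketch of the join shape; every index, sourcetype, and field name below is hypothetical and needs to be replaced with yours:
index=orders_index sourcetype=order_events
| table OrderID order_status order_time
| join type=left OrderID
    [ search index=app_index sourcetype=app_events
      | fields OrderID AppID ]
Note that join has result limits; if both datasets are large, a stats-based merge (append the two searches and | stats values(*) by OrderID) is usually more robust.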
Yes, I'm currently working with Splunk. I want to pull the data from Event Viewer, save it to a CSV file, and then add that data to Splunk. Is this the right way? I want the data to be understandable, like botsv.
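For comparison, the more common route than a manual CSV export is a Universal Forwarder on the Windows host with a Windows Event Log input; a minimal inputs.conf sketch (the index name is a placeholder):
[WinEventLog://Security]
disabled = 0
index = wineventlog
This keeps the original event structure, which generally makes the data more "understandable" in the botsv sense than flattened CSV rows.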
I'm trying to integrate with Azure DB.
Connection type: MS-SQL Server using jTDS driver
Port: 1433
OK, is the latency expressed in seconds? Imagine the latency is 180: does that mean I have to put -3m@m in earliest and now() in latest?
You can calculate the latency like this:
| eval latency=_indextime - _time
However, this is for the events already in the event pipeline. You could use it to find the maximum latency over a period and apply that statically to the earliest value in your next search. However, this is still only a static value, and there is no guarantee that you won't have missed events with higher latencies. You could periodically rerun the latency calculator to see if you are missing any events and adjust your search accordingly.
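A sketch of such a latency calculator, run periodically over a recent window (the index name is a placeholder):
index=your_index earliest=-24h
| eval latency=_indextime - _time
| stats max(latency) as max_latency_s avg(latency) as avg_latency_s
You would then pad the earliest of your real search by at least max_latency_s.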
Assuming you have already extracted the data field, and that the string in data is valid JSON (which your example is not), you could try this:
| spath input=data
| where 'response.action.type'="UserCreated" OR 'response.action.type'="TxCreated"
| eval id = coalesce('response.resources{}.id', 'response.actors.id')