All Topics


How can I retrieve the name of a file that was uploaded or shared in any collaboration tool, excluding files generated by the app itself? And how can I search for when someone joins a meeting in any collaboration tool?
Hi All, as you may be aware of Splunk's Security Content: for example, the Linux user creation detection at https://research.splunk.com/endpoint/51fbcaf2-6259-11ec-b0f3-acde48001122/ uses two macros. One of them is https://github.com/splunk/security_content/blob/develop/macros/security_content_summariesonly.yml. How do I actually implement or create this macro?
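A search macro like this can be created either via Settings > Advanced search > Search macros, or in a local macros.conf. A minimal sketch; the stanza name comes from the linked YAML file, but verify the definition against the repo before using it:

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/macros.conf
[security_content_summariesonly]
definition = summariesonly=false allow_old_summaries=true fillnull_value=null
```

After creating it, the detection's `security_content_summariesonly` reference should resolve; adjust `summariesonly=true` once your data model accelerations are in place.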
Hello Splunkers, I would like to change the value of frozenTimePeriodInSecs for one of my existing indexes. What should I be careful of? Just change the value on my Master Node and push the new bundle to my Indexers? Also, since my new frozenTimePeriodInSecs will be lower than the age of some of my index buckets, will those events be automatically rolled to frozen? Thanks a lot! GaetanVP
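For reference, the setting lives in indexes.conf and is applied per index. A hedged sketch with a hypothetical index name and retention value:

```
# indexes.conf, pushed from the Master Node to the indexer peers
[my_index]                          # hypothetical index name
frozenTimePeriodInSecs = 7776000    # 90 days; a bucket rolls to frozen once its newest event exceeds this age
```

Note that freezing means deletion unless a coldToFrozenDir or coldToFrozenScript is configured, so lowering the value below the age of existing buckets will remove that older data after the bundle push.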
Hi, I am trying to trim everything before "211 Withdrawal amount exceeded" from the output "WITHDRAWAL_AMOUNT_EXCEEDED; Refusal Reason: 211 Withdrawal amount exceeded". But in my logs, in some events the Refusal Reason is blank, so in those cases I need to trim off the first part instead. E.g., "Validation failed: Total amount is lower than configured min amount. ; Refusal Reason" and "MAINTENANCE; Refusal Reason:". Please help.
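A hedged SPL sketch for pulling out just the text after the label (the field name refusal_reason is hypothetical, and the pattern is inferred from the two example messages above):

```
... | rex field=_raw "Refusal Reason:?\s*(?<refusal_reason>.*)$"
    | eval refusal_reason=if(len(trim(refusal_reason))==0, "blank", refusal_reason)
```

The optional colon and the trailing `.*$` let the same extraction handle both the populated and the blank variants.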
Has anyone had this issue after upgrading to v9.1 from v9.0.4? The top toolbar shows "Loading..." but never loads. The issue happened on our Search Head, Indexer, and Deployment Server. I have also installed a full Splunk instance on a virgin server, and it has the same issue. Edit: Ticket logged, CASE [3258488]
Hello Team, I would like to schedule a report which needs to be running on the first business day of the month. Could you please let me know if this is possible.
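Splunk's cron scheduler cannot express "first business day" directly, but a common workaround is to schedule the report for the first three days of each month and have the search itself run only when today is actually the first weekday. A hedged sketch (times and field names are illustrative):

```
# savedsearches.conf: fire at 06:00 on days 1-3 of every month
cron_schedule = 0 6 1-3 * *
```

Then gate the search body, e.g. by appending something like:

```
... | eval d=tonumber(strftime(now(),"%d")), w=tonumber(strftime(now(),"%w"))
    | where (d==1 AND w>=1 AND w<=5) OR (w==1 AND d>=2 AND d<=3)
```

Here %w gives 0=Sunday..6=Saturday, so the condition passes on the 1st if it is a weekday, or on a Monday the 2nd or 3rd when the 1st fell on a weekend.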
What is meant by deployment server and what does it do?
Hi, I need help with parsing below data that is pulled from a python script. The data is pushed to system output and script monitoring is in place to read the data.  Below sample Json format data is printed to system output.  And below is the props currently present. The data has to be divided into multiple events after "tags."   [sourcetype_name] KV_MODE = json SHOULD_LINEMERGE = false disabled = false CHARSET = UTF-8 TRUNCATE = 0 LINE_BREAKER = \"tags\":\[[\w\W].*\}\,\{\"id\": SEDCMD-remove_trailing_comma = s/\},\s/}/g   Sample data. [{'id': 'e3b12550-db91-4a99-b20e-3e943fad2c7d', 'name': 'name', 'idn_name': '', 'idn': {}, 'tld': 'com', 'management_status': 'auto_renew_enabled', 'management_type': 'transfer', 'registered_at': '2015-09-15T00:00:00Z', 'expires_at': '2024-09-15T00:00:00Z', 'updated_at': '2022-01-06T13:58:54Z', 'created_at': '2021-12-03T11:40:56Z', 'managed_at': '2021-12-21T07:55:40Z', 'premium': 'Unknown', 'key_domain': False, 'whois_privacy': False, 'hidden_owner': False, 'local_presence': False, 'registry_lock': {'enabled': False}, 'nameservers': {'names': ['domain_name_service1.xyz3.abc.net', 'domain_name_service2.xyz3.abc.net', 'domain_name_service3.xyz3.abc.net', 'domain_name_service4.xyz3.abc.net', 'xyz1.abc.com', 'xyz2.abc.com', 'xyz3.abc.com', 'xyz4.abc.com'], 'labels': ['self_managed']}, 'signing_keys': [], 'registrant': {'id': 'a310c999-1b71-4a83-a6e3-f12af66a1001', 'name': 'Domain Administrator', 'email': 'abc@abc.com', 'phone': '+1.number', 'mobile': '', 'fax': '', 'organisation': 'company name', 'street1': 'address', 'street2': '', 'street3': '', 'city': 'city_details', 'state': 'state', 'postcode': 'pin_code', 'country_code': 'US'}, 'administrative': {'id': '2571a69a-07d2-442f-9142-5a418e4c0373', 'name': 'Domain Administrator', 'email': 'email', 'phone': '+1.number', 'mobile': '', 'fax': '', 'organisation': 'city_details Chocolate & Confectionery LLC', 'street1': '19 East Chocolate Avenue', 'street2': '', 'street3': '', 'city': 
'city_details', 'state': 'state', 'postcode': 'pin_code', 'country_code': 'US'}, 'technical': {'id': '311791fa-bf28-40d9-a348-d503b4fc4380', 'name': 'Technical Manager', 'email': 'webops@city_detailss.com', 'phone': '+1.number', 'mobile': '', 'fax': '', 'organisation': 'company name', 'street1': 'address '', 'street3': '', 'city': 'city_details', 'state': 'state', 'postcode': 'pin_code', 'country_code': 'US'}, 'account': {'id': 'd126c591-3ec0-4f63-afd1-5bbd923504c5', 'name': 'city_details Co (Primary)', 'contracting_company': 'consonum', 'parent': {'id': '283410a6-2785-466c-a880-51aa17c8b8b2', 'name': 'city_details Co'}}, 'active_zone': None, 'domain_name_servicesec': False, 'external_comments': '', 'tags': []}, {'id': '6a735a33-a942-4f42-9a66-2bbda1466855', 'name': 'name', 'idn_name': '', 'idn': {}, 'tld': 'com', 'management_status': 'auto_renew_enabled', 'management_type': 'transfer', 'registered_at': '2015-09-15T00:00:00Z', 'expires_at': '2024-09-15T00:00:00Z', 'updated_at': '2022-01-06T13:58:54Z', 'created_at': '2021-12-03T11:40:56Z', 'managed_at': '2021-12-21T10:57:38Z', 'premium': 'Unknown', 'key_domain': False, 'whois_privacy': False, 'hidden_owner': False, 'local_presence': False, 'registry_lock': {'enabled': False}, 'nameservers': {'names': ['domain_name_service1.xyz3.abc.net', 'domain_name_service2.xyz3.abc.net', 'domain_name_service3.xyz3.abc.net', 'domain_name_service4.xyz3.abc.net', 'xyz1.abc.com', 'xyz2.abc.com', 'xyz3.abc.com', .... this continues till the end of the log file}]   The point where the next 'id' key begins should start the next event, but in Splunk everything is coming in as a single event. Please help. Thanks in advance.
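One thing worth noting about the sample: it uses single quotes (Python repr output) rather than strict JSON, so both the double-quote-based LINE_BREAKER above and KV_MODE=json are unlikely to match. A hedged props.conf sketch that breaks on the single-quoted record boundary instead, assuming each record begins with {'id': as in the sample:

```
[sourcetype_name]
SHOULD_LINEMERGE = false
TRUNCATE = 0
LINE_BREAKER = \}(,\s*)\{'id':
```

Splunk discards the text of the first capture group at each break, so `}` stays with the previous event and `{'id':` starts the next one. For proper field extraction you may also want the script to emit real JSON (e.g. via json.dumps) so KV_MODE=json can work.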
Hi, Looking for splunk query to use field value of 1st search in join search query to filter event of search query inside join. Query: index=lsc_db2_qa_index sourcetype=lsc_db2_ewm_qa_outbound | dedup EDIDCDOCNUM | rex field=_raw "(?<dateTime>[\d\-\s:]+).\d{3}, TIME.*" | rename EDIDCDOCNUM as ewmIdoc EDIDCSTATUS as ewmIdocStatus MESTYP as ewmmesType dateTime as ewmCreateTime | table ewmIdoc ewmIdocStatus ewmmesType ewmCreateTime | join type=outer ewmIdoc [search index=webmethods_qa5555_index sourcetype=transactions_qa5555_src | search sender="AMAT_SAP_EWM" AND receiver="EXACTA" | rex field=_raw "(?<wmDateTime>[\d\-:\s]+) .*" | rex field=messageId "(?<docNum>\d+)\|\|(?<whoNum>.*)" | rex field=messageId "(?<docNum>\d+)" | eval wmcreateDateTime= if( like( message, "%request from EWM%" ), wmDateTime,"") | eval wmconfirmDateTime=if( like( message, "%request sent to Exacta successfully%" ), wmDateTime,"") | eval wmsentDateTime=if( like( message, "%ready to send to Exacta%" ), wmDateTime,"") | lookup wminterface_mapping.csv wmInterface as interface OUTPUT Interface | stats values(Interface) as Interface values(whoNum) as whoNum values(wmcreateDateTime) AS wmcreateDateTime values(wmconfirmDateTime) AS wmconfirmDateTime values(wmsentDateTime) AS wmsentDateTime by docNum | rename docNum as ewmIdoc] | eval ewmIdoc=ltrim(tostring(ewmIdoc),"0") | fields ewmIdoc ewmIdocStatus ewmmesType ewmCreateTime whoNum,Interface,wmcreateDateTime,wmconfirmDateTime,wmsentDateTime | join type=outer whoNum [search index=lsc_exacta_qa_index source="D:\\ProgramData\\Bastian Software\\Logs\\ExactaImportAdapter\\ExactaImportAdapter*" | rex field=_raw ".* ORDER_NAME=\"(?<imaWho>[\d-]+)\" .*" | rex field=_raw ".*JSON received for product import:.*\"product\":\"(?<imaWho>[\d-]+)\",.*" | rex field=_raw ".*JSON received for putaway import:.*\"who\":\"(?<imaWho>[\d-]+)\",.*" | eval exactaRecTime = strftime(_time,"%Y-%m-%d %H:%M:%S") | dedup imaWho sortby +exactaRecTime | eval exactaInfStatus = 
if(exactaRecTime != "","Success",NA) | table imaWho exactaRecTime exactaInfStatus | join type=outer imaWho [search index=lsc_exacta_qa_index source="D:\\ProgramData\\Bastian Software\\Logs\\ExactaImport\\ExactaImport.txt" | rex field=_raw ".* Order \[(?<imWho>[\d-]+) - .*\] successfully assigned.*" | rex field=_raw "\.* Bastian\.Exacta\.Interface\.Processes\.ExactaProductTranslatorBase - Validation of Message Successfull, Prepare to Insert\n.*ROWS ONLY;\@p0 = \'(?<imWho>[\d-]+)\'.*\[.*" | rex field=_raw ".*\/line id \[(?<imWho>[\d-]+) -.* was cancelled successfully.\n.*" | rex field=_raw ".*\[Import Pick Orders\].*ROWS ONLY;@p0 = \'(?<imWho>[\d-]+)\' \[[\S\s]*- Messages processed successfully.*" | eval exactaDocTime = strftime(_time, "%Y-%m-%d %H:%M:%S") | search imWho !="" | eval exactaDocStatus = if(exactaDocTime != "","Created",NA) | table imWho exactaDocTime exactaDocStatus | rename imWho as imaWho] | table imaWho exactaRecTime exactaDocTime exactaInfStatus exactaDocStatus | rename imaWho as whoNum] | search Interface = "*" | rename whoNum as "WHO/PRODUCT" | table ewmIdoc ewmIdocStatus ewmmesType ewmCreateTime "WHO/PRODUCT",Interface,wmcreateDateTime, wmsentDateTime, wmconfirmDateTime, exactaRecTime exactaDocTime exactaInfStatus exactaDocStatus     OUTPUT: I am looking to execute the last (Exacta) subsearch only on events whose "_time" value is equal to or greater than the "wmsentDateTime" value obtained from the webMethods subsearch.   Thanks Abhineet Kumar
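A join subsearch cannot see field values from the outer search, so one common pattern is to let the join run unfiltered and then drop rows after it. A hedged sketch (field names taken from the query above; the strptime format string is assumed to match how wmsentDateTime and exactaRecTime are built):

```
... | eval _wmsent_epoch = strptime(wmsentDateTime, "%Y-%m-%d %H:%M:%S")
    | where isnull(_wmsent_epoch)
         OR strptime(exactaRecTime, "%Y-%m-%d %H:%M:%S") >= _wmsent_epoch
```

If the Exacta data volume makes this too slow, the alternative is a map-style search or restructuring with stats over a shared key instead of nested joins.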
Hi Guys, we use Enterprise Security and we have configured an asset and identity list. From the global option "Asset and Identity Management" we have enabled merging of assets and identities for all sourcetypes. I can see the field names (email, first, nick, etc.) for most of my indexes. However, for some reason I am unable to see my identity list field names being populated when I search an index I created 3 weeks ago. Can someone help me troubleshoot this?
I am using the redis-extension for Redis monitoring, but it only monitors individual nodes. Is there any extension available that supports redis-cluster monitoring as well? The idea is to execute a command like "cluster info". Also, are there any dashboards available for Redis monitoring?
Hello, how can I get the information shown in Settings -> Indexes in a table? I know there is a REST command but I can't find it. What I need is the index name and current size. Thanks
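A hedged sketch using the REST endpoint behind that page (sizes on this endpoint are reported in MB; you may need list privileges on the indexes):

```
| rest /services/data/indexes
| table title currentDBSizeMB maxTotalDataSizeMB
```

Here title is the index name and currentDBSizeMB its current on-disk size; add splunk_server to the table if you want per-indexer rows rather than one row per index per peer being hidden.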
Hi All, when I try to add the new license, I get an error: "Bad Request — xxxx: failed to parse license because: Creation does not precede expiration; license can never be valid". What might cause this error and how do I resolve it?
The logs I am trying to onboard in Splunk have the following time format, "YY.MM.DD HH:MM:SS", so I made a props.conf accordingly:  [sourcetype name] DATETIME_CONFIG = TIME_FORMAT = %y.%m.%d %H:%M:%S TIME_PREFIX = ^ BREAK_ONLY_BEFORE_DATE = true MAX_TIMESTAMP_LOOKAHEAD = 20 NO_BINARY_CHECK = true SHOULD_LINEMERGE = false This config is the one Splunk created when I parsed the logs as an upload, and there the dates were read properly. When I deploy it in production, the sourcetype name exists but does nothing to parse the data. Also, the year, month, and day are read twice, meaning the hour in the logs becomes the date; for example, today all the logs are written at the 23:07:03 hour. I also tried changing DATETIME_CONFIG to CURRENT, but that is not working either. Nothing I change in the props file is taken into account in prod. For further context, the logs are created in the DMZ and sent to the index through a deployer in the DMZ. Where could the problem be? Why does props.conf work on the logs when uploaded but not when they are sent through a UF?
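One likely explanation, offered as a hypothesis: timestamp and line-breaking settings are applied at parse time, i.e. on the first heavy forwarder or indexer the data passes through, not on a universal forwarder. An upload through the UI parses locally (which is why it worked), so in production the same stanza must live on the parsing tier. A hedged restatement of the stanza as it would be deployed there:

```
# props.conf on the indexer or heavy forwarder that first parses this sourcetype
# (a universal forwarder will ignore TIME_FORMAT / TIME_PREFIX)
[sourcetype name]
TIME_PREFIX = ^
TIME_FORMAT = %y.%m.%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
SHOULD_LINEMERGE = false
```

After deploying, restart (or reload) the parsing instance; already-indexed events keep their old timestamps, so test with freshly arriving data.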
Hi, we have permitted some firewall rules from the public DMZ to internal services, including our DNS servers. While we are in the process of hosting a DNS server dedicated to DMZ devices, we need to monitor any suspicious activity from any device directed towards the internal DNS servers. What is the best way to achieve this? We looked for rules in Splunk Security Essentials but they didn't really help.
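As a starting point, and assuming the firewall logs are CIM-mapped into the Network_Traffic data model (an assumption; the DMZ range and DNS server IPs below are placeholders), a baseline of DMZ-to-DNS traffic might look like:

```
| tstats count from datamodel=Network_Traffic
    where All_Traffic.dest_port=53 All_Traffic.src="203.0.113.*"
    by All_Traffic.src, All_Traffic.dest
| rename All_Traffic.* as *
| sort - count
```

From there, alerting on new source hosts, unusual volumes, or destinations other than the sanctioned resolvers is usually more useful than a single static rule.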
Hi Guys, I need your help choosing an algorithm for predicting the next 1 month of CPU and memory data for an esx_cluster, whose value is dependent on host and VM metrics (CPU and memory). I have tried the Kalman filter and the LLP5 algorithm by feeding in one year of data, and got some difference between actual and predicted values. PFB the query I have used: my_query(with timechart at end)| predict "esx_cluster_name" as prediction algorithm=LLP5 holdback=2 future_timespan=32 upper95=upper95 lower95=lower95 period=30| `forecastviz(32, 2, "esx_cluster_name", 95)` Using the above query I got a good R-squared statistic around 0.95 and RMSE around 0.7. Please help me either by choosing a different algorithm which takes the dependencies mentioned above into account, or by suggesting how to improve my current query so that the difference between actual and predicted values reduces. Thanks in advance, K.Rohit. Happy Splunking
Hello guys, I have a [WinEventLog://Security] inputs.conf setup. However, I would like to see the machine/server type and OS type in the index, which are currently not there. How can I bring this data in? I have checked the raw data and there is no such information. Your thoughts? Thanks in advance. Regards
Hello, I'm looking for a splunk query to capture AD groups that are not integrated with SAML in Splunk Cloud
Hi All, I am trying to get data into my Splunk Cloud trial account using the HTTP Event Collector. After completing the required steps for setting up HEC, I am executing the following command: curl -k https://prd-<instance>.splunkcloud.com:8088/services/collector/event -H "Authorization: Splunk <token>" However, I am getting the following error: {"text": "The requested URL was not found on this server.","code":404} Am I using the correct URL for this, or is this related to something in the configuration? The tokens are enabled, and when I check the health of the instance, it returns this: {"text": "HEC is healthy","code":17}.
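One thing to check: without a -d payload, curl issues a GET, while the collector endpoint expects a POST with a JSON body. A hedged sketch, keeping the placeholders from the post above:

```
curl -k "https://prd-<instance>.splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -d '{"event": "hello world", "sourcetype": "manual"}'
```

Also worth verifying: on some Splunk Cloud stacks the HEC URL is https://http-inputs-<stack>.splunkcloud.com:443 rather than port 8088 on the search head hostname, so check which form applies to your trial stack.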
I have noticed that when searching for the list of groups a specific user is a memberOf, the search does not return the first group the user is a memberOf in AD. In the screenshot below, you can see the group "Domain Users" is not in the list of memberOf values, although this group is the default for all domain users. The group does show up when querying the user directly in AD.