All Topics

I'm trying to use the recently released 8.1.0 Universal Forwarder to send logs over HTTP: https://docs.splunk.com/Documentation/Forwarder/8.1.0/Forwarder/Configureforwardingwithoutputs.conf#Configure_the_universal_forwarder_to_send_data_over_HTTP

I have my outputs.conf configured as described in that documentation:

[httpout]
httpEventCollectorToken = [my_hec_token]
uri = http://[my_splunk_url]:8088
batchSize = 65536
batchTimeout = 5

I am also able to curl the HTTP Event Collector and successfully test the endpoint from the machine running the Universal Forwarder:

curl -k http://[my_splunk_url]:8088/services/collector/event -H "Authorization: Splunk [my_hec_token]" -d '{"event": "hello world"}'
{"text":"Success","code":0}

However, when I start the Universal Forwarder, it logs the following error in splunkd.log:

10-20-2020 14:41:40.989 +0000 ERROR S2SOverHttpOutputProcessor - HTTP 404 Not Found
10-20-2020 14:41:50.103 +0000 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...

I have tried using https (although I know that the HEC endpoint in this case does not use https), and I have tried providing the /services/collector/event or /services/collector URL paths in the config, but with any of these I instead get a 502 error in the log. How can I troubleshoot this?
Hi, recently we have observed more inactive sessions in the Oracle database, which is causing other batch jobs to fail. Is there any way to remove the inactive sessions in the database from AppDynamics, or can it be done by changing properties in the database agent configuration file? Any suggestions to solve this problem are highly appreciated. Thanks, Mukesh
So I was able to see my trial license after it was fixed in this thread: https://community.appdynamics.com/t5/Licensing-including-Trial/not-able-to-access-free-trail/m-p/41415#M631 But now I am trying to launch the app and I'm getting a permissions error.

^ edited by @Ryan.Paredez to remove an image that had sensitive information. Please do not share images of, or links to, your Controller URL in community posts, for security and privacy reasons.
Hi everyone! My time picker token spits out values like "-60m@m", and I want to convert this time value into an epoch time so I can filter based on epoch time. How do I convert this? Can I use strptime() to do it? If so, what format would I tell strptime() the time is in? Thank you!
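One answer worth noting: strptime() parses absolute timestamp strings, not relative time modifiers like "-60m@m"; the eval function relative_time() handles those. A minimal sketch, assuming a time picker token named $timetoken$ (the token name here is hypothetical):

```
| eval earliest_epoch = relative_time(now(), "$timetoken.earliest$")
| where _time >= earliest_epoch
```

relative_time(now(), "-60m@m") returns the epoch time for "60 minutes ago, snapped to the minute", which can then be compared against _time or any other epoch field.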
I would like to publish an app on Splunkbase, and I would like to know how long it takes to get approval after submitting a request to Splunk AppInspect. Thanks in advance!
Hello Splunkers, I have a requirement where we have 3 sites and plan to keep the search factor at 2 and the replication factor at 3.

Current config (SF=3 and RF=3):

[clustering]
site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3
site_search_factor = origin:1,site1:1,site2:1,site3:1,total:3

To bring the search factor to 2 across the sites, will the settings below work?

[clustering]
site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3
site_search_factor = origin:1,site1:1,site2:1,site3:1,total:2

What steps are involved in reducing to SF=2?

Cheers, Arun Sunny
Hi Everyone, how can I write a Splunk search query to check whether a particular variable's value has increased over the last 4 hours? Thanks in advance.
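One possible shape for such a query, as a sketch only: the index, sourcetype, and field name (value) below are placeholders and would need to match the actual data.

```
index=your_index sourcetype=your_sourcetype earliest=-4h
| stats earliest(value) as first_value latest(value) as last_value
| eval increased = if(last_value > first_value, "yes", "no")
```

Comparing earliest() against latest() over a 4-hour window flags a net increase; if intermediate changes matter, streamstats or delta could be used instead.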
Hi Team, I need help getting a few nodelabel values highlighted: "WANRT" and "DCNDC".

sitecode nodelabel
PJS LANCUA001
PCW LANCUA001
PCW WANINF001
PCW WANRTC001
PCW DCNDCI001

Below is the code I used to get the "WANRT" devices highlighted in red, but I am unable to also highlight the "DCNDC" devices:

<format type="color" field="nodelabel">
<colorPalette type="expression">if (like(value,"%WANRT%"),"#FF5733","#FFFFFF")</colorPalette>
</format>

Please help me with the code to get both highlighted.
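One likely fix, as a sketch (untested against this dashboard): the colorPalette expression accepts eval-style boolean operators, so the two patterns can be combined with OR inside the same if():

```
<format type="color" field="nodelabel">
  <colorPalette type="expression">if(like(value,"%WANRT%") OR like(value,"%DCNDC%"),"#FF5733","#FFFFFF")</colorPalette>
</format>
```

If the two device classes should get different colors, a nested if() with a second color code would work the same way.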
Hello Devs, unfortunately my Splunk events sometimes get cut off, and only the end part (varying from the last 1k to 10k lines) of the message appears in Splunk. This appears to happen only to large events: events with 5000+ lines, or single-line events of 100kb+. By "sometimes" I mean it occurs in about 30 out of 200k+ cases, and if I re-import them by copying them again into a logfile, they appear complete in Splunk.

The logfiles come via a log4j2 async FileAppender and use sourcetype log4j-a.

props.conf:

[log4j-a]
TRUNCATE = 0
BREAK_ONLY_BEFORE = \d{4}.\d{2}.\d{2}\s+\d{2}:\d{2}:\d{2}\s+-\s+log=
MAX_EVENTS = 50000

I tried using time_before_close and multiline_event_extra_waittime in inputs.conf, but it got worse: other events which had not been incomplete before got corrupted. Any ideas what might be the problem? Let me know if you need more information.

Best regards, Dimitrios
Good afternoon. We have had constant problems with this app: every time it gave an error, the application was restarted and the data was re-indexed. In this case it is used for monitoring Twitter; the app key has been acquired, but it still keeps presenting errors. The details of inputs.conf and the errors from _internal are attached.

[rest://twitters]
activation_key = 3Cxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
auth_type = oauth1
endpoint = https://stream.twitter.com/1.1/statuses/filter.json
http_method = GET
index = twitter
index_error_response_codes = 1
log_level = INFO
oauth1_access_token = 1298721659038257158-F18lXXXXXXXXXXXXXXXXXXXXXX
oauth1_access_token_secret = 3GhF21OVmif1mbVXXXXXXXXXXXXXXXXXXXXXX
oauth1_client_key = QxIyiALfbXXXXXXXXXXXXXXXXXXXXXX
oauth1_client_secret = PFDQMVpImp2XXXXXXXXXXXXXXXXXXXXXX
request_timeout = 86400
response_type = text
sequential_mode = 0
sourcetype = tweetsloc
streaming_request = 1
url_args = track=VirginMobile_cl,fibra,3play,iptv,internet hogar ^stall_warnings=true

###### ERRORS ########

INFO:restmodinput:rest://twitters : Entered Polling Loop
INFO:restmodinput:rest://twitters : Executing HTTP Request
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/local1/splunk_HF2/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/local1/splunk_HF2/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/local1/splunk_HF2/etc/apps/rest_ta/bin/rest.py", line 687, in do_run
    for line in r.iter_lines():
  File "/local1/splunk_HF2/etc/apps/rest_ta/bin/requests/models.py", line 795, in iter_lines
    for chunk in self.iter_content(chunk_size=chunk_size, decode_unicode=decode_unicode):
  File "/local1/splunk_HF2/etc/apps/rest_ta/bin/requests/models.py", line 754, in generate
    raise ChunkedEncodingError(e)
ChunkedEncodingError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))

  File "/local1/splunk_HF2/etc/apps/rest_ta/bin/rest.py", line 495, in do_run
    url_args = dict((k.strip(), v.strip()) for k,v in
  File "/local1/splunk_HF2/etc/apps/rest_ta/bin/rest.py", line 494, in <genexpr>
    (item.split('=',1) for item in url_args_str.split(delimiter)))
ValueError: need more than 1 value to unpack

Any support is appreciated. Greetings.
Hi, I am creating a dashboard like the one below and want to check for duplicates in a particular column. The table shows how the dashboard will look initially; later, if the file value is "abcdefghi", I want the status to change to "data collected". Could anyone help me with this?

app file status
one abcdefghi waiting for data
two jklmnopq waiting for data
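One simple sketch of the status logic, assuming the field names app, file, and status from the table above:

```
| eval status = if(file == "abcdefghi", "data collected", "waiting for data")
```

If the "collected" files arrive from a separate search or lookup, the same if() (or a lookup followed by fillnull) can be applied after joining on the file field.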
I want to set up a 30-day report, and I want to receive this report by email on the last day of the month, whatever that last day is: the 28th, 30th, or 31st. How can I set up the schedule for this report?
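One common workaround, as a sketch: standard cron expressions (which Splunk's scheduler accepts) have no "last day of month" token, so the report can be scheduled for days 28-31 and the search itself made to return results only when tomorrow is the 1st.

```
# Cron schedule for the report: 23:00 on days 28-31
0 23 28-31 * *
```

And appended to the report's search:

```
| where strftime(relative_time(now(), "+1d@d"), "%d") == "01"
```

On the 28th-30th of a longer month the where clause filters everything out, so the email (with "only send if results" enabled) only goes out on the true last day.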
Hi everybody, I hope someone can help me out. I appreciate any further comments.

I have two searches that I have to compare, but they have different formats.

First search:

index="schedule" "uploaded to S3, number of rows:"
| rex "File (?<table_name>.*?csv)"
| eval table_name=substr(table_name, 0, len(table_name) - 28)
| eval rows = replace(Success,"number of rows:","")
| eval rows = substr(rows, 1, len(rows)-1)
| eval trans_date=strftime(_time, "%m-%d-%y")
| search trans_date=10-14-20
| stats sum(rows) as rows by warehouse, table_name, trans_date
| table table_name, trans_date, rows

Second search:

index=schedule TransactionPurgingSummary AND "\"newPurging\":true"
| rex "TransactionPurgingSummary: (?<TransactionPurgingSummary>.*?})"
| spath input=TransactionPurgingSummary
| eval trans_date=strftime(_time, "%m-%d-%y")
| search objectsPurgedSummary.transaction0 > 0
| stats sum(objectsPurgedSummary.transaction0) as b_transaction0, sum(objectsPurgedSummary.auxiliarymessage) as b_auxiliarymessage, sum(objectsPurgedSummary.transactionmessage) as b_transactionmessage, sum(objectsPurgedSummary.transactionmessagevalue) as b_transactionmessagevalue, sum(objectsPurgedSummary.trans_transactionmessages) as b_trans_transactionmessages, sum(objectsPurgedSummary.labelling_transaction_pfd_link_tb) as b_labelling_transaction_pfd_link_tb, sum(objectsPurgedSummary.labelling_transaction_alert_link_tb) as b_labelling_transaction_alert_link_tb by warehouse, trans_date
| table trans_date, b_transaction0, b_auxiliarymessage, b_transactionmessage, b_transactionmessagevalue, b_trans_transactionmessages, b_labelling_transaction_pfd_link_tb, b_labelling_transaction_alert_link_tb

I am aiming to create an alert that compares, by trans_date, the rows per table_name from the first query against the corresponding value in the second query, e.g. transaction0 rows = b_transaction0.
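One hedged sketch of how the comparison could be wired up (the placeholders <first search> and <second search> stand for the two full searches above, and the transaction0 mapping is one example): run the first search, append the second as a subsearch, and compare per trans_date.

```
<first search>
| append [ search <second search> ]
| stats values(rows) as rows values(b_transaction0) as b_transaction0 by trans_date
| eval result = if(rows == b_transaction0, "OK", "MISMATCH")
| where result == "MISMATCH"
```

With the alert condition set to "number of results > 0", the alert fires only when the two counts disagree; the remaining b_* columns would each get their own comparison along the same lines.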
Hi Everyone, I have one requirement. Below is my drop-down code, from which I am populating OrgName:

<label> Licenses Clone06</label>
<fieldset submitButton="false" autoRun="true">
<input type="multiselect" token="OrgName" searchWhenChanged="true">
<label>Org Name</label>
<choice value="*">All Salesforce Org</choice>
<search>
<query>index="ABC" sourcetype="XYZ" | lookup Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName | stats count by OrgName</query>
<earliest>$field1.earliest$</earliest>
<latest>$field1.latest$</latest>
</search>
<fieldForLabel>OrgName</fieldForLabel>
<fieldForValue>OrgName</fieldForValue>
<delimiter> OR </delimiter>
<initialValue>*</initialValue>
<default>*</default>
</input>

The issue I am facing is that it does not work when I select multiple orgs from the drop-down. Sometimes the dashboard displays only the 1st value from the drop-down, or if I select 6 values it displays the data for only 4. Below is the code for the query:

<query>index="ABC" sourcetype="XYZ" $type$ |lookup Org_Alias.csv OrgFolderName OUTPUT OrgName| search OrgName=$OrgName$|dedup OrgName, LicenseName, OrgId |stats sum(TotalLicenses) as "Total-Licenses" sum(UsedLicenses) as "Used Licenses" sum(UnusedLicenses) as "Unused Licenses" by LicenseName OrgName OrgId | sort -Total-Licenses</query>

Can someone point out what is wrong with the query?
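One likely cause, sketched here rather than confirmed against this dashboard: with OrgName=$OrgName$ in the query, the multiselect expands to something like OrgName=orgA OR orgB, so only the first value is bound to the field. SimpleXML multiselect inputs support valuePrefix/valueSuffix to wrap each selected value, so the token can carry the full field comparison:

```
<input type="multiselect" token="OrgName" searchWhenChanged="true">
  ...
  <valuePrefix>OrgName="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
</input>
```

The query would then use the token on its own, e.g. | search $OrgName$ instead of | search OrgName=$OrgName$, producing OrgName="orgA" OR OrgName="orgB" for multiple selections.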
In a search head cluster with six machines, only one SH machine is not giving results for a particular app. We have checked the top right corner > Help > About > Server. All 5 other SHs give results for this dashboard; only the one does not. Could anyone suggest some troubleshooting steps? I have cross-checked the app config among the search head members. Thanks.
I want to set up a user-friendly data catalogue for a large Splunk deployment. As I'm a newbie, I'd welcome suggestions.
Hi, in this dashboard I want to change the fields and columns. For example, I want "planned_AB_ECD1" beside "Time spent in Hrs_AE_ECD1". Please help.
Is there a way to mask/hash/encrypt the password when deploying this TA with the Deployer? https://splunkbase.splunk.com/app/1852/
Hello, the server only says "Server error" in Search & Reporting, without showing "inspect job". How can I debug it? Regards, Arnim
I have the following props and transforms, which work fine and do what I need them to do.

PROPS

[mydata_logs]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = true
NO_BINARY_CHECK = true
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
TRANSFORMS-set = setnull,keptevents

TRANSFORMS

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keptevents]
REGEX = ^.+INFO:
DEST_KEY = queue
FORMAT = myindex

This leaves me with events in my log such as the following:

2020-10-02 17:01:32,360 INFO: User.val (value, value2, value3, value4): User not found. Parameters: myid: 1; orig: userKO: userId: 1234567
2020-10-02 17:09:48,123 INFO: Helper.loadObjects(): Username does not exist. mystique
2020-10-02 18:01:48,546 INFO: CleanupProcess.executeHelper(): Running cleanup process for Silly 1.2.3.4000 ...

What I am trying to do with the remaining logs is remove the lines that are system events. In the above example I want to remove the events that read:

2020-10-02 17:09:48,123 INFO: Helper.loadObjects(): Username does not exist. mystique
2020-10-02 18:01:48,546 INFO: CleanupProcess.executeHelper(): Running cleanup process for Silly 1.2.3.4000 ...

This should leave only the following event making it to my index:

2020-10-02 17:01:32,360 INFO: User.val (value, value2, value3, value4): User not found. Parameters: myid: 1; orig: userKO: userId: 1234567

Through regex (using CleanupProcess.executeHelper as an example), ^CleanupProcess.+ would target that line so I could filter it out, but I need help constructing the props/transforms to do this. Below are the props and transforms adjusted. I tried changing the order so that "keptevents" was directly after the first null (i.e. setnull), or at the end, ensuring that props reflected the order.

UPDATED PROPS

[mydata_logs]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = true
NO_BINARY_CHECK = true
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
TRANSFORMS-set = setnull,keptevents,cleanupprocess_filter,helper_filter

UPDATED TRANSFORMS

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[cleanupprocess_filter]
REGEX = ^CleanupProcess.+
DEST_KEY = queue
FORMAT = nullQueue

[helper_filter]
REGEX = ^Helper.+
DEST_KEY = queue
FORMAT = nullQueue

[keptevents]
REGEX = ^.+INFO:
DEST_KEY = queue
FORMAT = myindex

The above is an example, as there are more filters I need to apply as I work through my data set. Unfortunately I have no way on the syslog instance to isolate these logs at the source.

Thanks in advance
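One likely issue with the updated transforms above, offered as a sketch rather than a tested fix: every event begins with a timestamp, so anchored patterns like ^CleanupProcess can never match. Transforms also apply in the order listed in TRANSFORMS-set, with the last matching DEST_KEY = queue assignment winning, so the nullQueue filters must run after keptevents and match the class name after the timestamp:

```
# props.conf -- filters listed after keptevents so they can
# override its queue assignment for matching events
[mydata_logs]
TRANSFORMS-set = setnull,keptevents,cleanupprocess_filter,helper_filter

# transforms.conf -- match the class name anywhere after "INFO:"
# instead of anchoring at the start of the line
[cleanupprocess_filter]
REGEX = INFO:\s+CleanupProcess\.
DEST_KEY = queue
FORMAT = nullQueue

[helper_filter]
REGEX = INFO:\s+Helper\.
DEST_KEY = queue
FORMAT = nullQueue
```

With this ordering, setnull drops everything, keptevents re-queues INFO lines, and the two filters then null out the Helper and CleanupProcess events among them.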