All Posts


Hello, I am trying to get the above addon working in our environment. Our environment comprises 2 heavy forwarders and a deployment server; the heavy forwarders filter all data to Splunk Cloud. When setting the addon up, I confirmed that both heavy forwarders can connect to our on-premises Jira server and that both pull down data from Jira (e.g. projects). We have the setup in passthrough mode, with passthrough enabled within Splunk Cloud. I'm aware that Splunk Cloud will connect to the heavy forwarders and pull information from the KV store, but this does not appear to be happening. The addon within Splunk Cloud still tries to connect to Jira when an account is populated in the configuration; when the configuration is removed, it complains about needing an account. A bearer token has been created within Splunk Cloud and both heavy forwarders have been populated with the bearer token. Has anyone successfully set this up, and if so, do you have any pointers?
Hi, on a brand new Splunk install, the app tries to import urllib2, but Splunk only ships urllib3. There is an except clause where a "," is used instead of "as" (line 388 of splunk_rest_client.py). It also tries to use something from the cStringIO module, which does not exist in Splunk or the app.
Hi @ChaoticMike, Splunk doesn't track the individual steps (I asked for this on Splunk Ideas), so you can only calculate the overall latency. Ciao. Giuseppe
Thanks Giuseppe.  Our problem is we aren't sure if our latency is in the forwarding chain, or within Splunk Cloud.  We can indeed determine the end-to-end latency, but we are trying to drill into each hop.  Does anyone know of a way to do that?  It sounds... 'tricky'!  
Hello all,   The Splunk default admin name has been changed and now I get the below error on Splunk DB connect. Please can someone let me know which conf file holds this info so I can change it to the new username?     Splunkd error: HTTP 400 -- User with name=admin does not exist
Hi @aditsss, there's something wrong in this search: there's a closing square bracket without a matching opening one. Could you share the correct search? Ciao. Giuseppe
You are correct.
Hello everyone. Please reply if you have a solution for adding a "show more" / "show less" function to a Splunk dashboard table column. Let's say there is a table with 4 columns - C1, C2, C3, C4 - and 5 rows - R1, R2, R3, R4, R5. Column C2 has 1 value in R1, 10 values in R2, 4 in R3, 5 in R4, and 2 in R5. I need one value shown by default; if a cell has more than one value, a "show more" option should appear to expand the remaining values, and a "show less" option to collapse them again. Thanks in advance!
What is not correct about the StartTime and EndTime fields?  What do you expect them to be?
When you removed the blacklist setting, did you also restart the forwarder(s)? Are there any transforms or Ingest Actions in the data path that might also be discarding the events?
Please explain what you mean by "it doesn't fully work"?  How does it fall short? What exactly are you trying to do with the coalesce function? Rather than ask how to use specific commands, I suggest you explain your inputs and desired outputs.  Then someone can recommend a query.
I figured out why this was throwing the error and am posting the solution here in case it helps someone. I was sure that I had not used any IPs while configuring the instances; however, I just noticed that when I used the cluster manager URI in server.conf for search head mode, it picked up the IP addresses of the peers (the default behavior, I think) instead of the FQDNs. The cert SAN did not have an IP address in it. To overcome this, I added the line below to server.conf on each cluster peer, and it resolved the issue.

[clustering]
register_search_address = FQDN
Hi, I am a sysadmin myself, and as I wrote in my original post, this information is available in ~/.bash_history, but only by manually checking after SSHing into the server. If I weren't aware of how to check this, I wouldn't have mentioned checking the user's history. It doesn't matter which flavor of Linux you use, Ubuntu or the RHEL family; anyone familiar with user-deletion activity will know this issue, because it is the same on any Linux flavor. We are on RHEL 7.9, and in /var/log/secure all we see are messages of the following type when someone runs the userdel command. There is no further message or record in /var/log/secure of who ran the command. That's my use case, and that is why I drew a parallel with Windows Event Viewer logs, to see how others handle similar use cases.
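One common way to capture who ran userdel is a Linux audit watch rule - a sketch, assuming auditd is installed and running (it is by default on RHEL 7). The file name user-mgmt.rules and the key names are arbitrary choices. Audit records include the auid (the original login user, even when the command was run via sudo), which is exactly what /var/log/secure lacks here:

```
# /etc/audit/rules.d/user-mgmt.rules -- auditd drop-in rules file
# Log every execution of userdel, tagged with a searchable key
-w /usr/sbin/userdel -p x -k user_deletion
# Optionally also watch the identity files that userdel modifies
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
```

After loading the rules (e.g. with augenrules --load), matching events can be pulled with `ausearch -k user_deletion -i`, and /var/log/audit/audit.log can be monitored by a Splunk forwarder like any other file.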
Hi Team, below is my search query: search index="abc" sourcetype =$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully" | eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")|head 7 | eval EBNCStatus="ebnc event balanced successfully" | table EBNCStatus True ] |rename busDt as Business_Date |rename fileName as File_Name |rename CARS.UNB_Duration as CARS.UNB_Duration(Minutes) |table Business_Date File_Name StartTime EndTime CARS.UNB_Duration(Minutes) Records totalClosingBal totalRecordsWritten totalRecords EBNCStatus |sort -Business_Date I am sorting by business date, but my StartTime and EndTime are not coming out correct. Can someone guide me? Below is a screenshot of the same.
How do you change the font size of text inside bar charts, column charts, and tables using the dashboard XML source? I tried font-size: 15 but it didn't work in the XML source.
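Simple XML has no font-size attribute for these elements, so one common workaround is a hidden HTML panel carrying CSS that targets panels by id. This is a sketch, not an official option: the panel ids mytable and mychart, the 15px size, and the $alwaysHideCSS$ token (an undefined token, so the row stays hidden) are all assumptions to adapt to your own dashboard:

```xml
<dashboard>
  <row depends="$alwaysHideCSS$">
    <panel>
      <html>
        <style>
          /* "mytable" and "mychart" are hypothetical panel ids -- match your own panels */
          #mytable table td, #mytable table th { font-size: 15px !important; }
          /* Chart labels are SVG text nodes, so they need a separate selector */
          #mychart svg text { font-size: 15px !important; }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel id="mytable">
      <table>
        <search><query>index=_internal | stats count by sourcetype</query></search>
      </table>
    </panel>
  </row>
</dashboard>
```

The !important flag is needed because Splunk's own stylesheets set these properties with higher specificity.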
Hi @ChaoticMike, in Splunk you have: _time, which is the event timestamp, and _indextime, which is the time when the event is indexed. So you can calculate the difference between these two fields: index=* | eval diff=_indextime-_time | stats avg(diff) AS diff_avg max(diff) AS diff_max min(diff) AS diff_min BY index Ciao. Giuseppe
Hello, For solid reasons that I can't go into here, we have a topology of... AWS CloudWatch -> Kinesis Firehose -> AWS Delivery Stream Object -> AWS Lambda -> HEC listener on a Heavy Forwarder -> That Heavy Forwarder -> Another Heavy Forwarder -> Splunk Cloud. I'm pretty sure that (apart from having one HF forward to a second before hitting Splunk Cloud) this is the reference architecture for CloudWatch events. There is no Splunk indexing going on in our infrastructure; we are just forwarding loads of information to Splunk Cloud for indexing and analysis there. We can establish latency through most of that chain, but we are interested in determining the latency from when our events land in Splunk Cloud to when those events become visible for analysis. Is there a handy metric or query we can reuse? Thanks in advance...
Try something like this [search ] earliest=-4w | eval current_day = strftime(now(), "%A") | eval log_day = strftime(_time, "%A") | where current_day == log_day | eval hour=strftime(_time, "%H") | eval day=strftime(_time, "%d") | stats count by hour day HTTP_STATUS_CODE | chart avg(count) as average by hour HTTP_STATUS_CODE
Hi @man03359, what do you mean by "src_ips that have mismatched src name and device name"? Maybe src_ips that have a different src_name or a different device_name? If this is your requirement, please try this: index="idx-network-firewall" (sourcetype="fgt_traffic" OR sourcetype="fortigate_traffic") | lookup Stores_Inventory src_ip OUTPUT Device | stats latest(_time) AS latest values(srcname) AS srcname latest(app) AS app dc(srcname) AS srcname_count dc(Device) AS Device_count BY src_ip | where srcname_count>1 OR Device_count>1 | table src_ip Device srcname app This way you'll list all the src_ips with more than one name or device. Ciao. Giuseppe
Hi All, Below is my search query - index="idx-network-firewall" (sourcetype="fgt_traffic" OR sourcetype="fortigate_traffic") | stats latest(_time) values(srcname) as src latest(app) as app by src_ip | lookup Stores_Inventory src_ip OUTPUT Device | table src_ip Device src app  I have 3 fields src_ip, src and device. I am getting the field values for src from the first 2 lines of the query - index="idx-network-firewall" (sourcetype="fgt_traffic" OR sourcetype="fortigate_traffic") | stats latest(_time) values(srcname) as src latest(app) as app by src_ip  I am trying to build a search query that finds src_ips that have mismatched src name and device name.   Thanks in advance.