All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello everyone, I'm having real trouble getting the Splunk forwarder to install on CentOS 8.2 (RHEL-compatible, 64-bit) in a VM. I'm using rpm and following the install guide on Splunk's website, but for the life of me I can't get the file to install. I used wget to download the free splunkforwarder package, and that completed successfully. The file landed in my /opt/ directory; at first the permissions were read/write only, so I made the file executable, but rpm still won't recognize the downloaded file as an rpm package. I have also set SELinux to permissive, with no success. I thought I'd ask in case anyone sees something I haven't, or has run into this before.
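A common cause of this symptom is that wget saved something other than a valid rpm, such as an HTML error page, or kept the URL query string in the file name. A quick way to check, sketched below with a hypothetical file name; note that rpm does not require the package file to be executable:

```shell
# Check what wget actually saved; a failed download often leaves an HTML
# error page behind, and wget sometimes keeps the URL query string in the name.
cd /opt
ls -l splunkforwarder*
# "file" should report "RPM v3.0 ..." -- "HTML document" means the download failed
file splunkforwarder-*.rpm
# Query the package header before installing; this fails fast on a bad file
rpm -qip splunkforwarder-*.rpm
# Install (rpm does not need the file to be executable)
rpm -ivh splunkforwarder-*.rpm
```

If `file` reports HTML or plain text, re-download the package using the direct download URL from Splunk's website.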
I have been tasked with writing queries for the following and I am not sure how to go about it:

Detection / Event Name: Master Password Use
Event Description: The master password used to access the backend vault was used

Detection / Event Name: Backend Vault Built-in Admin Use
Event Description: The built-in admin account on the backend vault was used

Detection / Event Name: sssd.conf modified on Linux server
Event Description: The sssd.conf file was modified on a Linux server
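Searches like these depend entirely on what the vault product and the Linux hosts actually send to Splunk. Assuming the vault writes an authentication audit log and the Linux servers run auditd with a file-watch rule on /etc/sssd/sssd.conf, hedged starting-point sketches might look like this (every index, sourcetype, and field name below is a placeholder to adapt):

```
# Master Password Use (hypothetical vault audit sourcetype and field names)
index=vault sourcetype=vault:audit action=login user="master"

# Backend Vault Built-in Admin Use
index=vault sourcetype=vault:audit action=login user="admin"

# sssd.conf modified on a Linux server
# (assumes an auditd watch such as: -w /etc/sssd/sssd.conf -p wa -k sssd_change)
index=linux sourcetype=linux:audit key="sssd_change" "/etc/sssd/sssd.conf"
```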
Currently we have a separate panel that shows some information for our panels; essentially a title panel with some descriptive text. This creates multiple panels that add a lot of excess items to our dashboards and XML. I was looking to move these values into the title area of the panel itself. I saw suggestions to put it as

<html>
  <h2> Title text </h2>
  <h2> description </h2>
</html>

Doing this, however, means that my title and description appear below my input dropdown, which is not ideal. I would like these higher up, in the title section, but I cannot add multiple things to the title section. Is there a way to position these above the input, or to add multiple items in the title option?
While attempting to clone (and mask) events that belong to select source patterns, CLONE_SOURCETYPE doesn't honor the REGEX. The goal is to restrict cloning to events that have dev or tst in their source, so prod, perf, uat, etc. wouldn't get cloned. It seems that no matter what the REGEX in the clone stanza in transforms.conf is, the events get cloned. The temporary solution was to run a nullQueue for the non-dev and non-tst sources. What am I doing wrong here? Any thoughts/suggestions?

Note: the test file doesn't have any source defined. The only place I supply a source is the rename-source argument, as below.

How I run this using oneshot:

splunk add oneshot test-foo.txt -rename-source "sfdc_object://User_splunk_dev_cnf" -index mask -sourcetype sfdc:orig -host dev_01
[WORKS - clones should be created; works as expected]

splunk add oneshot test-foo.txt -rename-source "sfdc_object://User_splunk_prod_cnf" -index mask -sourcetype sfdc:orig -host dev_02
[DOESN'T WORK - clones shouldn't be created, but they are]

props.conf

[sfdc:orig]
TRANSFORMS-sfdc-orig = sfdc_cloner

[sfdc:clone]
EVAL-mn = "foo"

transforms.conf

# sources are one of the following:
#   sfdc_object://User_splunk_dev_cnf
#   sfdc_object://User_splunk_tst_cnf
#   sfdc_object://User_splunk_prod_cnf
...

[sfdc_cloner]
# Only clone those where sources don't have _prod_
REGEX = ^(?=.*(dev|tst)).*
# Tried this as well - no luck
#REGEX = (sfdc_object:.*(dev|tst)_cnf.*)
SOURCE_KEY = MetaData:Source
FORMAT = $0
DEST_KEY = _raw
CLONE_SOURCETYPE = sfdc:clone
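For what it's worth, the nullQueue workaround mentioned above can be formalized on the cloned sourcetype itself: let everything clone, then drop any clone whose source is not dev/tst. A sketch, not tested against this data (the stanza names are arbitrary):

```
# props.conf
[sfdc:clone]
TRANSFORMS-dropclones = sfdc_clone_drop

# transforms.conf -- route clones from non-dev/non-tst sources to the null queue
[sfdc_clone_drop]
SOURCE_KEY = MetaData:Source
REGEX = ^(?!.*_(dev|tst)_cnf)
DEST_KEY = queue
FORMAT = nullQueue
```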
Hello, I have a similar situation where I have two sources of data, and in the data I get the filenames processed, but the file-naming convention is different in the two sources. To work around that, I build a pattern using eval and some string manipulation so the names match across both sources. I am trying to find filenames that are in source1 but not in source2. Here is what I am trying:

index="clouddata" Application=CS Message.PublisherId="PROD_*ONGOING*"
| rename Message.FileName as cs_filename
| dedup cs_filename
| eval ercode = mvindex(split(cs_filename,"_"),1)
| eval servicedatetime = mvindex(split(cs_filename,"_"),2)
| strcat ercode servicedatetime fileSearchStr
| eval fileSearch = substr(fileSearchStr,0,18)
| table fileSearch
| where NOT fileSearch IN
    [search index="serverdata" Application=SP
    | rename Message.FileName as sp_filename
    | dedup sp_filename
    | eval ercode = mvindex(split(sp_filename,"_"),0)
    | eval datetime = mvindex(split(sp_filename,"_"),1)
    | strcat ercode datetime fileSearchStr1
    | eval fileSearch="\"".fileSearch."\""
    | stats values(fileSearch) as search delim=","
    | nomv search]

The field fileSearch looks like "10010JYR2011240547", and when I run the subsearch as a separate main query it gives me something like "10005ABC2020112405","10010JYR2011240547","100839TIN202011240","83101ICC2020112406". I am getting this error:

Error in 'where' command: The expression is malformed. Expected (.

Can I get some help on this?
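For reference, the `where` command does not accept a subsearch at all, which is what produces the "Expected (" error; the `search` command does. One hedged rework (a sketch, untested against this data) is to let the subsearch emit a ready-made filter via `format` and negate it:

```
index="clouddata" Application=CS Message.PublisherId="PROD_*ONGOING*"
| rename Message.FileName as cs_filename
| dedup cs_filename
| eval fileSearch = substr(mvindex(split(cs_filename,"_"),1).mvindex(split(cs_filename,"_"),2), 0, 18)
| search NOT
    [ search index="serverdata" Application=SP
      | rename Message.FileName as sp_filename
      | dedup sp_filename
      | eval fileSearch = mvindex(split(sp_filename,"_"),0).mvindex(split(sp_filename,"_"),1)
      | fields fileSearch
      | format ]
```

`format` turns the subsearch rows into a `( ( fileSearch="..." ) OR ... )` expression, so no manual quoting or `nomv` is needed.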
Guys, I have the following log that I need to index in Splunk, breaking each line into its own event. What would be the best sourcetype for this log format?

TIME=20201031064817502 started|src=NSS|UCPU=0|SCPU=0
TIME=20201031064817506||LUSED=1|LMAX=138|OMAX=-1|LFEAT=osr_swirec,dtmf,osr_rec_tier4|UCPU=125|SCPU=31
TIME=20201031064854505 EVNT=SWIepst|VERSION=11.0.3.2019061409|UCPU=5703|SCPU=250

The timestamp format is year, month, day, hours, minutes, seconds, milliseconds.
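Since every event starts with TIME= followed by a fixed 17-digit timestamp (%Y%m%d%H%M%S plus milliseconds), a custom sourcetype along these lines could work. This is a hedged sketch, not tested against the full data, and the sourcetype name is arbitrary:

```
# props.conf
[custom:timelog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=TIME=\d{17})
TIME_PREFIX = ^TIME=
TIME_FORMAT = %Y%m%d%H%M%S%3N
MAX_TIMESTAMP_LOOKAHEAD = 17
```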
Good afternoon. Currently I have a dashboard that shows events in a table, which are then exported to a .csv file. Can I add the date and time to the file name? I attach the example:

<html>
  <a class="btn btn-primary" role="button" href="/api/search/jobs/$export_sid_task$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=SDESK-all_tasks<date>_<time>.csv&amp;outputMode=csv">Download CSV</a>
</html>

Greetings.
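One hedged approach (a sketch, untested): Simple XML can compute a token with `<eval>` when the export search finishes, and that token can then be spliced into the filename attribute. The search id and token names below are placeholders:

```xml
<search id="export_search">
  <query>... your export search ...</query>
  <done>
    <set token="export_sid_task">$job.sid$</set>
    <eval token="file_ts">strftime(now(), "%Y%m%d_%H%M%S")</eval>
  </done>
</search>

<html>
  <a class="btn btn-primary" role="button"
     href="/api/search/jobs/$export_sid_task$/results?isDownload=true&amp;maxLines=0&amp;count=0&amp;filename=SDESK-all_tasks_$file_ts$.csv&amp;outputMode=csv">Download CSV</a>
</html>
```

Note the timestamp reflects when the search completed, not when the user clicks Download.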
On the home page of AppDynamics, we have a Servers tab which lists all of our servers. I would like to extract server resource-utilization details such as CPU and memory consumed. I checked the Metric Browser with the following path:

Application Infrastructure Performance|Tier_Name|Hardware Resources|Memory|Used %

In this path I do not see any values at all, but on the home page's Servers tab I see all of our servers listed along with the metrics I'm looking for. Is there an API I can use to pick up these server resource-consumption metrics?
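AppDynamics does expose a Metric and Snapshot API on the Controller (GET /controller/rest/applications/{application}/metric-data, taking a metric-path plus a time range). Below is a minimal Python sketch that only builds the request URL; the controller host and application name are placeholders, and note that server-level metrics may live under the Servers / Machine Agent hierarchy rather than the application path tried above:

```python
from urllib.parse import urlencode, quote

def build_metric_url(controller, app, metric_path, mins=60):
    """Build (but do not send) an AppDynamics metric-data REST URL."""
    params = {
        "metric-path": metric_path,
        "time-range-type": "BEFORE_NOW",
        "duration-in-mins": mins,
        "output": "JSON",
    }
    return (f"{controller}/controller/rest/applications/"
            f"{quote(app)}/metric-data?{urlencode(params)}")

url = build_metric_url(
    "https://example.saas.appdynamics.com",   # placeholder controller host
    "MyApp",                                  # placeholder application name
    "Application Infrastructure Performance|MyTier|Hardware Resources|Memory|Used %",
)
print(url)
```

The pipe separators in the metric path must be URL-encoded (%7C), which `urlencode` handles; the real request would also need the Controller account credentials or an API token.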
Hello all. I selected the Location as Splunk Enterprise, though I am using Splunk Free, which I believe is based on Splunk Enterprise. Using the Free version, I am unable to install an app from Splunkbase. I did not see any info stating I couldn't do this with Splunk Free; if I am wrong, that solves my issue. If not, would someone kindly help? Here is the app I am trying to install: https://splunkbase.splunk.com/app/936/#/details

This is the error when I try to install the app:

There was an error processing the upload.Invalid app contents: archive contains more than one immediate subdirectory: and haversine

I am using WinZip to extract the TGZ file. Here is the file structure:

haversine
- appserver
- bin
- default
- metadata
- README.txt

That's it. Thanks in advance for your help. Safe and healthy to you and yours, and Happy Thanksgiving. God bless, Genesius
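That error usually means the extraction or re-compression left a stray entry next to the haversine folder at the top level of the archive (WinZip sometimes adds one). Re-packaging so that the app folder is the only immediate subdirectory generally fixes the upload. A sketch in Python, with hypothetical paths:

```python
import os
import tarfile

def repackage_app(extracted_dir, app_name, out_path):
    """Re-tar an extracted Splunk app so the archive contains exactly one
    top-level directory (the app folder), which app uploads require."""
    with tarfile.open(out_path, "w:gz") as tar:
        # arcname=app_name makes the app folder the archive root entry;
        # anything else sitting in extracted_dir is left out.
        tar.add(os.path.join(extracted_dir, app_name), arcname=app_name)

# Hypothetical usage:
# repackage_app("C:/temp/extracted", "haversine", "C:/temp/haversine.tgz")
```

Then install the new .tgz via Apps > Manage Apps > Install app from file.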
I am using Splunk Add-on for Unix and Linux 8.2.0 and have enabled the metrics index to collect disk usage. I can search the disk-used percentage with the search below, but it is the average over all mount points:

| mstats avg(_value) where index=linux-os AND metric_name=df_metric.UsePct

If I only want stats for a specific mount point, there seems to be no way to do it with the mstats command. Is there another approach that still takes advantage of the metrics index's fast performance?

Searching the raw data for the metrics index,

| msearch index=linux-os | search sourcetype=df_metric

the result looks like the event below, which shows the data was ingested as JSON, with the metrics created in a separate metrics index. However, in the metrics index there seems to be no way to differentiate by the MountedOn field, as it's not a "metric":

{
  Filesystem: /dev/vda1
  IP_address: 10.1.2.3
  MountedOn: /
  OS_name: Linux Server
  OS_version: 3
  Type: ext4
  entity_type: TA_Nix
  environment: dev
  metric_name:df_metric.Avail_KB: 11035324
  metric_name:df_metric.Size_KB: 20509408
  metric_name:df_metric.UsePct: 44
  metric_name:df_metric.Used_KB: 8429164
}

Any thoughts or solutions?
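For what it's worth, MountedOn in that output is a dimension rather than a metric, and mstats can both filter on dimensions in its where clause and group by them. Hedged sketches (untested against this index):

```
| mstats avg(_value) where index=linux-os AND metric_name=df_metric.UsePct AND MountedOn="/"

| mstats avg(_value) where index=linux-os AND metric_name=df_metric.UsePct BY MountedOn
```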
So we log API calls and response errors; however, I'm having issues searching for the correlating event in a log.

Example: I'm searching for events related to the endpoint https://api.lyft.com/v1/paydisputes/oi/orders? and I'm able to get back the log of what was sent:

Nov 24 16:20:17 ip-11-222-111-100 order-service: [2020-11-24 16:20:17.399] [INFO ] [417863580]: [reactor-http-server-epoll-8] c.v.o.o.m.s.HostedApiRestClient [] - Sending Request - Method: 'GET', Path: 'https://api.test.com/v1/paydisputes/oi/orders?arn=1363473846864745645&authorizationCode=039319&settlementAmount=12.17&authorizationAmount=12.17&settlementDate=2020-11-16T00:00:00Z&authorizationDate=2020-11-16T00:00:00Z&settlementCurrency=USD&authorizationCurrency=USD&creditCardBin=473702&creditCard4=1788&orderId=14738858272569688&transactionId=234235423523543&sellerId=42201', Headers: '{x-test-api-version=[1.4], Accept=[application/json], Authorization=[Bearer ******], Content-Type=[application/json]}'

host = ip-11-222-111-100
index = syslog
source = /var/log/syslog
sourcetype = syslog

When I go to "Show Source" there is an associated event:

Nov 24 16:20:17 ip-11-222-111-100 order-service: [2020-11-24 16:20:17.512] [ERROR] [417863693]: [reactor-http-client-epoll-11] c.v.o.o.OrderGlobalWebExceptionHandler [] - Error occurred while processing web request org.springframework.web.server.ResponseStatusException: 404 NOT_FOUND "Failed to receive order from merchant hosted api with statusCode=404, Not Found"#012#011at com.verifi.orderservice.order.merchanthosted.service.HostedApiRestClient.handleResponse(HostedApiRestClient.java:68)#012#011at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:153)#012#011at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:67)#012#011at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114)#012#011at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:192)#012#011at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:192)#012#011at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:76)#012#011at .....

Nov 24 16:20:17 ip-11-222-111-100 #011org.springframework.web.server.ResponseStatusException: 404 NOT_FOUND "Failed to receive order from merchant hosted api with statusCode=404, Not Found"

I can't seem to build a search that pulls any errors associated with the GET call based on the next event showing an error code. I think it's because they are not correlated in any way other than, possibly, time?
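Since the request and error lines share only the host and proximity in time (their bracketed ids, 417863580 vs 417863693, differ), one hedged approach is to stitch them together with transaction over a short window. A sketch, assuming the "Sending Request" line always precedes its error and that a few seconds is a safe window:

```
index=syslog sourcetype=syslog host=ip-11-222-111-100
    ("Sending Request" OR "Error occurred while processing web request")
| transaction host maxspan=5s startswith="Sending Request" endswith="Error occurred"
| search eventcount>1 paydisputes
```

transaction is expensive and time-based pairing is fragile; if the application can be changed, logging a shared request id on both lines is the robust fix.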
So I've been able to successfully configure a dashboard that uses post-process searching to populate a table of email headers. Once the user clicks a specific row in the table, a UUID field is passed to a bar chart in the lower half of the dashboard. Both post-process searches appear to be working; however, the bar chart at the bottom ends up showing "No results found". When I click "Open in Search" for that bar chart, the correct search query shows up, including the contextual UUID, and there is data in the search results. On top of that, if I click the Visualizations tab, I see the bar chart that I'm looking for. Is there some sort of refresh of the bar chart that I'm missing on the table click? Is there some other reason why that bar chart won't populate?

<dashboard>
  <label>Mail Flow Header Analysis</label>
  <!-- Global Search for Mail Flow Header -->
  <search id="allHeaders">
    <query>host=pgnet326* sourcetype="mailflow-3"</query>
    <earliest>1579766400</earliest>
    <latest>1580198400</latest>
  </search>
  <row>
    <panel>
      <table>
        <search base="allHeaders">
          <query>search "from=nagios" | rex field=_raw "^(?&lt;date&gt;.*) uuid=(?&lt;uuid&gt;.*) from=" | table date,uuid</query>
        </search>
        <option name="count">10</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <drilldown>
          <set token="uuid_selected">$row.uuid$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <chart depends="$uuid_selected$">
        <title>UUID: $uuid_selected$</title>
        <search base="allHeaders">
          <query>search uuid=$uuid_selected$ | rex field=_raw "^(?&lt;date&gt;.*) uuid=(?&lt;uuid&gt;.*) from=(?&lt;from&gt;.*) to=(?&lt;to&gt;.*) delay=(?&lt;delay&gt;.*)" | strcat "from " from " to " to hop | sort +_time | table hop, delay</query>
        </search>
        <option name="charting.chart">bar</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
</dashboard>

[Screenshots: dashboard with "No results found"; after clicking "Open in Search"; after clicking the Visualizations tab]
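One common cause of exactly this symptom: a post-process search only sees the fields the base search explicitly retains, so a non-transforming base search should end with a `| fields` clause. A hedged tweak to try on the base search:

```xml
<search id="allHeaders">
  <query>host=pgnet326* sourcetype="mailflow-3" | fields _raw, _time</query>
  <earliest>1579766400</earliest>
  <latest>1580198400</latest>
</search>
```

"Open in Search" can still work when the dashboard panel does not, because it re-runs the whole pipeline as a normal search rather than as a post-process of the cached base job.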
The Splunk Fundamentals Part 1, Module 5 "Using Search" video says that both selecting on the timeline and zooming in with the Zoom to Selection button reuse the same search results and do not redo the search. However, according to the Fundamentals PDF, pages 67-68, selecting a narrower time range will not re-execute the search, while zooming in with Zoom to Selection will re-execute it.

The Splunk documentation does not clarify this. It says: "When you use the timeline to investigate events, you are not running a new search. You are filtering the existing search results." And: "When you select a set of bars on the timeline and click Zoom to Selection, your search results are filtered to show only the selected time period. The timeline and events list update to show the results of your selection."

The documentation does not state that Zooming Out re-executes the search, but we know that is the case; it simply states that new times are chosen for the Time Range Picker. Can we assume that whenever new times are chosen for the Time Range Picker, a new search is executed for those times? If so, then Zooming In / Zoom to Selection should also re-execute the search.

When actually testing the timeline, for both Zoom Out and Zoom to Selection I can see all of the previous search results disappear, the page refresh, and new results displayed. Doesn't that mean the search has been re-executed? Whereas when I simply select a timeframe on the timeline (but do not press Zoom to Selection), the results change to show only the related events, and the page does not refresh.

Some official clarification, or perhaps an update of the Splunk training, would be greatly appreciated.
What is typically the best way to build Splunk searches with the following logic?

1. First search (get a list of hosts)
2. Get results
3. Second search (for each result, perform another search, such as finding its vulnerabilities)

My example is searching Qualys vulnerability data.

Searching HTTP-header data first and including tag results in the search query:

index=qualys QID=48118 [search index=qualys "WebLogic RCE - CVE-2020-14882" | dedup IP | table IP]
| stats latest(_time) values(DNS) values(RESULTS) by IP

The issue with this search is that it doesn't include systems with the RCE tag but no QID=48118 (HTTP-header data).

Searching the tag first, then joining:

index=qualys "WebLogic RCE - CVE-2020-14882"
| dedup IP
| table IP, DNS
| join type=left IP [search index=qualys QID=48118 | dedup IP RESULTS]
| stats values(DNS), values(RESULTS) by IP

The issue here is that I only get back one HTTP RESULT; there should be a few for each open port. Any links on the best way to create subsearches from results would be great for learning. Thanks.
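On the second variant: join keeps only the first matching subsearch row per key by default; max=0 keeps them all, and the subsearch should pass IP plus RESULTS through explicitly. A hedged rework (untested against this data):

```
index=qualys "WebLogic RCE - CVE-2020-14882"
| dedup IP
| table IP, DNS
| join type=left max=0 IP
    [ search index=qualys QID=48118 | dedup IP RESULTS | fields IP, RESULTS ]
| stats values(DNS) AS DNS, values(RESULTS) AS RESULTS by IP
```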
Hello! I have some JSON events that each look something like this:

{
  "id": 12345,
  "steps": [
    { "stepName": "A", "stepDuration": 0.5 },
    { "stepName": "B", "stepDuration": 0.17 }
  ]
}

My existing searches do an mvexpand on the steps field so that each step becomes its own event, which I can then manipulate. This works great for small numbers of events, but when I am processing thousands of events with 100+ steps each, I quickly run into the default memory limits imposed on mvexpand. Is there an alternative I am missing that can compute summary statistics such as "the average duration of step A is X.XX" and "X% of events hit step B"? If not, is there a better way to structure the events themselves to support this? My constraint is that I need to allow an arbitrary number of steps occurring in an arbitrary order that must be preserved. Thanks in advance!
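For the two specific statistics named, mvexpand may be avoidable: spath can pull the arrays out as multivalue fields, mvzip pairs names with durations, and (on Splunk 8.0+) mvmap can project out one step's durations; stats then aggregates over the multivalue field directly. A hedged sketch, assuming the JSON is in _raw:

```
| spath path=steps{}.stepName output=stepName
| spath path=steps{}.stepDuration output=stepDuration
| eval pairs = mvzip(stepName, stepDuration, "=")
| eval hitB = if(isnotnull(mvfind(stepName, "^B$")), 1, 0)
| eval a_dur = mvmap(pairs, if(match(pairs, "^A="), tonumber(mvindex(split(pairs, "="), 1)), null()))
| stats avg(a_dur) AS avg_step_A, avg(hitB) AS frac_events_with_B
```

Multiply frac_events_with_B by 100 for a percentage. mvzip preserves the original array order, so ordering constraints are respected.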
Hi All, We have an IDM in our cloud environment and we would like to ingest data and logs from Teams with the add-on installed there. Has anyone had any success doing this, or is the only way to utilise a heavy forwarder?
Hi all, I want to integrate 5000 network elements into Splunk via syslog, so 5000 directories will be created where the data is collected. Could anyone suggest how to write inputs.conf to monitor all 5000 of these directories? It's quite difficult to write 5000 stanzas, so I'm looking for a simplified solution. Please help.
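inputs.conf monitor stanzas accept wildcards, so one stanza can cover all 5000 directories, and host_segment can derive the host field from the directory name. A sketch, assuming a hypothetical layout where each element writes under its own directory below /var/log/netsyslog/:

```
# inputs.conf -- one wildcard stanza instead of 5000
[monitor:///var/log/netsyslog/*/]
index = network
sourcetype = network:syslog
# the 4th path segment (the per-element directory name) becomes the host field
host_segment = 4
```

Adjust host_segment to match the depth of the element directory in the real path.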
Hi, I want to send a pure CSV data file, as is, to Splunk via the HTTP Event Collector. How can I do it? Should I send it to /raw or /event, or something else? How do I make Splunk parse it properly? Thanks.
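The /services/collector/raw endpoint accepts an arbitrary payload and passes it through Splunk's normal parsing pipeline, whereas /services/collector/event expects a JSON envelope, so raw is the natural fit for CSV; for field extraction the sourcetype assigned to the HEC input would still need CSV-aware props (e.g. INDEXED_EXTRACTIONS = csv). The Python sketch below only builds the request, without sending it; the host, port, and token are placeholders:

```python
import urllib.request

def hec_raw_request(host, token, payload, sourcetype="csv", port=8088):
    """Build (but do not send) a POST to Splunk HEC's raw endpoint."""
    # ?sourcetype= overrides the token's default sourcetype for this payload
    url = f"https://{host}:{port}/services/collector/raw?sourcetype={sourcetype}"
    return urllib.request.Request(
        url,
        data=payload,                                  # CSV bytes go in verbatim
        headers={"Authorization": f"Splunk {token}"},  # HEC token auth header
        method="POST",
    )

req = hec_raw_request("splunk.example.com",
                      "00000000-0000-0000-0000-000000000000",
                      b"name,value\nfoo,1\nbar,2\n")
```

Sending it is then `urllib.request.urlopen(req)` (with TLS verification configured appropriately).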
Hi All, there is a strange issue I am facing with tstats. When I run the query using | from datamodel:, it gives the proper result and all expected fields appear in the output. But when I run the same query with | tstats summariesonly=true, it doesn't return any results. Any idea what to check and how I can resolve this issue?

Thanks, Bhaskar
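For context, summariesonly=true restricts tstats to data-model acceleration summaries, so if acceleration is disabled, still building, or the searched time range falls outside the summary range, tstats returns nothing while | from datamodel (which falls back to raw events) still works. A hedged first check, with a placeholder data-model name:

```
| tstats summariesonly=false count from datamodel=Your_DataModel by _time span=1h
```

If that returns results, compare against the same search with summariesonly=true, and check the acceleration status and summary range under Settings > Data models.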