All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Are subscription and perpetual licenses compatible, i.e., can they live on the same license server? Also, is it possible for an indexer to pull from two different license servers? This is the scenario: there is a large pool of indexers drawing from a subscription license. Our team wants to use the Splunk infrastructure with our own legacy perpetual license. What is the best way to do this without spinning up a new environment? Thanks
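A hedged sketch of one possible layout: a single license manager can carve its licenses into separate pools, so the perpetual license could live alongside the subscription one. Note, however, that each indexer points at exactly one license manager (the master_uri / manager_uri setting, depending on version), so pulling from two license servers at once is not something I would expect to work. Pool names, quota, and GUID placeholders below are hypothetical:

```
# server.conf on the license manager (sketch; names and GUIDs are hypothetical)
[lmpool:subscription_pool]
description = Indexers covered by the subscription license
quota = MAX
stack_id = enterprise

[lmpool:legacy_perpetual_pool]
description = Our team's legacy perpetual license
quota = 52428800
stack_id = enterprise
slaves = <indexer-guid-1>,<indexer-guid-2>
```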
(Although this example uses Splunk's Lookup Editor app, it applies to custom REST commands in general.) I am using the Lookup Editor from SplunkBase (authored by @LukeMurphey) and am saving lookups with the user in context, which saves each one as a private, user-scoped artifact. I would like to make a lookup globally shared while retaining its ownership information, so I attempt the following after the REST call that saves the lookup:

# Set ACL to make lookup globally shared
url = '/servicesNS/%s/%s/data/lookup-table-files/%s/acl' % (owner, namespace, lookup_file)
postargs = {
    'owner': owner,
    'sharing': 'global'
}
rest.simpleRequest(url, postargs=postargs, sessionKey=key, raiseAllErrors=True)

When using the Lookup Editor as an admin, this works fine, since the admin has the capabilities to make such modifications; however, ordinary users are not able to do so and the call fails. How can I programmatically make this REST call as an admin? I am hoping that, since the script runs server-side, I won't have to authenticate as an admin, as I imagine that would require storing credentials, which would be tricky because we have several clusters. I have also tried making the REST call using the Splunk binary as mentioned in this Answer, but it had no effect.
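One approach that might fit, assuming the save happens inside a custom REST handler: restmap.conf has a passSystemAuth setting for script handlers, which asks splunkd to hand the handler a system-level auth token, so the ACL POST can be made with elevated rights without storing credentials. The stanza and handler names below are hypothetical, and the exact key under which the token arrives may differ by handler type:

```
# restmap.conf (sketch; stanza and handler names are hypothetical)
[script:lookup_backend]
match = /lookup_backend
scripttype = persist
handler = lookup_backend.LookupBackendHandler
passSystemAuth = true
```

Inside the handler, the request payload should then carry a system_authtoken that can stand in for `key` in the rest.simpleRequest call above.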
I want to parse nested JSON at index time. What should the props and transforms be? I want to split all the fields inside message onto separate lines. Sample event:

{
   id: 3614979212324797096956714454
   message: {"@t":"2021-05-14T17:19:02.0149138Z","@m":"Upload metrics: \"{ duration = 81.9555, productCode = ct, tenantCode = , validBundle = True, validProductCode = True, validTenantCode = , bundleSize = 9670, successful = True }\"","@i":"0b918ffa","@l":"Information","@lt":"dev","metrics":"{ duration = 81.9555, productCode = ct, tenantCode = , validBundle = True, validProductCode = True, validTenantCode = , bundleSize = 9670, successful = True }","SourceContext":"Atlas.FhirStore.Api.Services.MetricsFhirResourceService","ActionId":"43adca80-545-4b1f-b9dd-d4008f3594b3","ActionName":"Atlas.FhirStore.Api.Controllers.FhirResourceController.UploadBundle (Atlas.FhirStore.Api)","RequestId":"0HM8MV64LIURF","RequestPath":"/api/v1/CT/bundle","SpanId":"|eb806e4b-47275043ec09ec97.2.a9d44dc9_","TraceId":"eb806e4b-47275043ec09ec97","ParentId":"|eb806e4b-47275043ec09ec97.2.","ThreadId":14,"X-Correlation-Id":"0HM8K136VRBAK:00000156","X-Correlation-Name":"IntegrationHubService"}
   timestamp: 1621012742015
}
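As a hedged sketch: index-time extraction of deeply nested JSON is usually more trouble than it is worth, and the inner fields can often be unpacked at search time instead. The sourcetype name below is hypothetical:

```
# props.conf (sketch; sourcetype name is hypothetical)
[my_nested_json]
# parse the outer JSON structure at index time
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = timestamp
# epoch milliseconds
TIME_FORMAT = %s%3N
```

At search time, the inner JSON carried in message can then be expanded with something like `... | spath input=message`, which yields @t, @m, metrics, etc. as separate fields.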
I have logs with data in two fields: _raw and _time. I want to search the _raw field for an IP in a specific pattern and return the URL that follows the IP. I'd like to see it in a table, in one column named "url", and also show the date/time in a second column using the contents of the _time field. Here's an example of the data in _raw:

[1.2.3.4 lookup] : http://www.dummy-url.com/ --

I'd like to use a query like the following, which looks for a specified IP and returns the URL that follows after the colon:

rex field=_raw "1.2.3.4 lookup\] \: (?<url>[\w\:\/\.\-]+)"

The datasource looks like this:

sourcetype="datasource.out"

Can you help me with a query that searches for the IP and returns the URL (from _raw) and date/time (from _time) in table format? Thanks!
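A minimal sketch of such a query, escaping the dots in the IP so they match literally rather than as wildcards:

```
sourcetype="datasource.out"
| rex field=_raw "\[1\.2\.3\.4 lookup\] : (?<url>[\w:\/\.\-]+)"
| table _time url
```

_time renders as a timestamp in the table by default; add `| fieldformat _time = strftime(_time, "%Y-%m-%d %H:%M:%S")` if a specific date/time format is wanted.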
I need a detailed step-by-step process for deploying Splunk apps and add-ons from our GitLab server to the Splunk deployment server (a Linux backend), and for pushing changes back to GitLab. I know I should first log into GitLab and generate SSH keys, and that the SSH key is then copied to our Splunk deployment server on the command line. Please give me the detailed step-by-step process to accomplish this task.
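A hedged sketch of the usual workflow, with all host names, paths, and repo names hypothetical:

```
# 1. On the deployment server, clone the app repo into deployment-apps
cd /opt/splunk/etc/deployment-apps
git clone git@gitlab.example.com:splunk/my_app.git my_app

# 2. Tell the deployment server to notice the new or changed app
/opt/splunk/bin/splunk reload deploy-server

# 3. After local edits, push the changes back to GitLab
cd my_app
git add -A
git commit -m "Update my_app configuration"
git push origin main
```

This assumes the SSH key generated earlier has already been added to the GitLab account, and that serverclass.conf already maps my_app to the right clients.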
I have a scripted input which runs the command ntpstat, and the results are sent to the os index. When the NTP daemon is not running, there is an error message, "Unable to talk to NTP daemon. Is it running?", which gets indexed to _internal. Is there a way to redirect error messages to the os index?
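One common pattern, sketched here: a scripted input's stdout goes to the index configured on the input, while its stderr goes to splunkd.log (and hence _internal), so a small wrapper that merges stderr into stdout keeps the errors in the os index:

```
#!/bin/sh
# ntpstat_wrapper.sh (sketch): merge stderr into stdout so error text
# such as "Unable to talk to NTP daemon" is indexed with the results
ntpstat 2>&1
exit 0
```

Point the scripted input stanza at this wrapper instead of at ntpstat directly.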
Here is the inputs.conf entry:

[batch://opt/splunk/var/run/splunk/csv/*.csv]
disabled = false
move_policy = sinkhole
index = test-metrics
sourcetype = metrics_csv

However, as I monitor /opt/splunk/var/run/splunk/csv/, I see the CSV files are still there and not getting indexed. This should have been a really simple test, but I can't figure out why batch is not working. If I hardcode a specific CSV file, it works:

[batch://opt/splunk/var/run/splunk/csv/test.csv]
disabled = false
move_policy = sinkhole
index = test-metrics
sourcetype = metrics_csv

But obviously I need it to pick up all the CSV files, so I should be able to use the wildcard *.csv.
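One thing worth checking, offered as a guess: batch stanzas for absolute paths are usually written with three slashes (batch:// plus the leading / of the path). A sketch of the stanza in that form:

```
# inputs.conf (sketch): note the three slashes before "opt"
[batch:///opt/splunk/var/run/splunk/csv/*.csv]
disabled = false
move_policy = sinkhole
index = test-metrics
sourcetype = metrics_csv
```

If that makes no difference, splunkd.log entries mentioning that path are the next place I would look.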
I figured out how to add blacklisting for forwarders (deployed apps), but where is this configured on the Splunk server itself? This is version 7.2.9.1, and the Splunk server is Windows. I thought it might be d:\program files\splunk\etc\system\local\inputs.conf, but that file looks very different from the one that is deployed to forwarders.
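On a full Splunk Enterprise instance the same inputs.conf syntax applies; the stanzas just live in etc\system\local (or better, in an app) on the server itself. A hedged sketch, with the event codes purely illustrative:

```
# %SPLUNK_HOME%\etc\system\local\inputs.conf on the server (sketch)
[WinEventLog://Security]
disabled = 0
# drop noisy event codes; 4662 and 5156 here are just examples
blacklist = 4662,5156
```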
Hello Splunkers, Is there a way to restrict web UI access? Users should not be able to view any options/menus to choose from. The dashboard links are shared with users, and they should only be able to load the dashboard without any search capabilities (dashboard loading will trigger searches, but there should be no manual search from panels; normally we see a search icon at the bottom-right corner of each panel).
I have a cluster master with a couple of indexers in a cluster, and a search head that references those indexers. I need to update the MMDB file and have been able to download it. I am going to follow various guides I have found for this process, essentially doing something like https://github.com/georgestarcher/TA-geoip: create a new app on my cluster master under master-apps, then push it via the cluster master to my indexers as a slave app. The Splunk documentation advises that iplocation is a distributed search command, meaning it will use one of the indexers when the query is run. What is not clear is whether I also need to update the MMDB on my search heads. If the command runs on the indexers, is there a need to do it on the search heads as well? I suspect the answer, as with most things like this, is to just update it on both the indexers and the search heads to avoid unwanted problems, but I thought I would check in case I missed something.
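For what it is worth, a hedged sketch of the search head side: parts of a search can also execute on the search head (for example when results are already local), so updating both tiers is the safe route, and limits.conf can point iplocation at the app's copy of the database. The path below is hypothetical:

```
# limits.conf (sketch; path is hypothetical)
[iplocation]
db_path = /opt/splunk/etc/apps/TA-geoip/data/GeoLite2-City.mmdb
```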
I am new to Splunk, so I want to know how I can fetch the total time taken per request.

applog.msg=XXXX_Logs,CorrelationId=XXXXXXXXXX,URL=XXXX.com,ServiceKey=xyzService,No_Of_Requests=4,Total_Time_Taken=3

Total time taken per request = Total_Time_Taken / No_Of_Requests
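Assuming Splunk auto-extracts the key=value pairs in these events (otherwise a rex would be needed first), a minimal sketch of the per-request calculation:

```
sourcetype=<your_sourcetype> "applog.msg=XXXX_Logs"
| eval time_per_request = Total_Time_Taken / No_Of_Requests
| table CorrelationId ServiceKey No_Of_Requests Total_Time_Taken time_per_request
```

With the sample values above this yields 3 / 4 = 0.75 per request.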
When I try to extract the BiosMake field in my log file with field extraction (regex mode), I get this error:

Error in 'rex' command: regex="^\w+="\d+\.\d+\.\d+\.\d+"\s+\w+=\w+\d+\s+\w+=\d+\s+\w+=\w+\-\w+\s+\d+\-\w+\s+\w+=\d+\.\d+\s+\w+=\w+\d+\s+\w+\.\s+\d+\.\d+\.\d+\s+\w+=\d+\s+\w+=\d+\w+\d+\s+\d+:\d+:\d+\.\d+\s+\w+=\w+\.\w+\.\w+\\\w+\-\w+\-\w+\d+\w+\s+\w+=\w+\d+\s+\w+\s+\d+\s+\w+\s+\w+=\w+\s+\w+\s+\w+=\w+\d+\s+\w+=(?P<volumeEncryptionState>\w+)" has exceeded configured match_limit, consider raising the value in limits.conf

This is my log:

AgentVersion="2.5.1126.0" ComputerManufacturerName=ASDA3101705 iscompliant=1 policyCipherStrength=AES-CBC 128-Bit TpmVersion=1.4 BiosVersion=N75 Ver. 01.33 Id=292629 LatestEntry=2021May14 14:31:36.077 MachinesUsersNames=eu.airbus.corp\TA-ADMIN-ST40783 OperatingSystemName=ASDA3101705 Windows 10 Enterprise ComputerType=Portable Name=ASDA3101705 volumeEncryptionState=Encrypting TpmMake=IFX BiosMake=Phoenix Technologies LTD
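The match_limit error generally means the regex engine is backtracking heavily; a long chain of \w+/\s+ tokens matched against the whole event is a classic trigger. Anchoring on the literal field names avoids the problem entirely. A sketch:

```
... | rex "volumeEncryptionState=(?<volumeEncryptionState>\w+)"
| rex "BiosMake=(?<BiosMake>.+)$"
```

Each pattern only has to find its literal key, so there is almost nothing to backtrack over.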
I would like to list those events (requirements) whose state changed to Agreed in the last 3 days. Today the database has 4 requirements whose state is Agreed.

Example: on 05/14, ID_3 changed to Agreed:

_time    ID    Req_status
05/14    1     Agreed
05/14    2     Agreed
05/14    3     Agreed
05/14    4     Agreed

On 05/13 only 3 requirements were Agreed (ID_2 changed to Agreed on 05/13):

_time    ID    Req_status
05/13    1     Agreed
05/13    2     Agreed
05/13    4     Agreed

On 05/12 only 2 requirements were Agreed (ID_1 changed to Agreed on 05/12):

_time    ID    Req_status
05/12    1     Agreed
05/12    4     Agreed

On 05/11 only 1 requirement was Agreed:

_time    ID    Req_status
05/11    4     Agreed

Expectation: 3 new requirements arrived in the last 3 days, and I would like to list only those newly arrived requirements:

ID    Req_status
1     Agreed
2     Agreed
3     Agreed

Thanks for help!
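Assuming each requirement's state is logged once per day as in the samples above, the first day an ID appears with Req_status=Agreed approximates when it changed, so something like this sketch could work (the index name is hypothetical):

```
index=<your_index> Req_status=Agreed
| stats earliest(_time) AS first_agreed BY ID
| where first_agreed >= relative_time(now(), "-3d@d")
| fieldformat first_agreed = strftime(first_agreed, "%m/%d")
```

With the sample data this keeps IDs 1, 2, and 3 and drops ID 4, which was already Agreed before the window.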
I have two dashboards, A and B. Both have dropdown inputs for the host field and can be used individually, but Dashboard B is also mainly used as a detail drilldown for one search on Dashboard A. What I want is that, when I click the drilldown in Dashboard A, the token value for the host field in Dashboard B is set to the host token value from Dashboard A. Thanks in advance for any help.
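In Simple XML this is usually done by passing the clicked value to the target dashboard as a form.<token> URL parameter. A sketch, where the dashboard id (dashboard_b), app name (my_app), and token name (host_tok) are all hypothetical:

```
<!-- Dashboard A: drilldown on the relevant panel (sketch) -->
<drilldown>
  <link target="_blank">/app/my_app/dashboard_b?form.host_tok=$row.host$</link>
</drilldown>
```

Dashboard B's dropdown input just needs its token named host_tok for the value to pre-select on load.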
Hi, any suggestions on how I can collect avgLoad1m for each CPU core (hosts with multi-core CPUs) with the Splunk_TA_nix app? Since this app uses vmstat_metric.sh to collect data, I need to collect the CPU core info as dimensions, but it only collects these three dims: IP_address, OS_name, OS_version. I run this search to get the _dims values:

| mcatalog values(_dims) as dimensions WHERE index=<MY_INDEX> sourcetype="vmstat_metric" host=<MY_HOST> by sourcetype host index

Thanks,
Hi All, I need help. I installed the ServiceNow add-on, and I know this add-on pushes incidents to the following table:

x_splu2_splunk_ser_u_splunk_incident

Is there a way to change this table? I don't know whether this modification is made on the Splunk side or the ServiceNow side. Thanks in advance
Hi, I have an issue with a query of mine. It is exactly 378 lines long, yet I managed to save it on my dashboard without any problems. Now I cannot open it from there: it always shows me a "connection reset" blank page when I try. I guess it is because my query is too long (it has hundreds of "like" conditions in it). With other queries I don't experience this issue. I have saved the query into a Word document, and whenever I run it from there, it runs perfectly. Could you tell me please what I can do in such a case, when I have this long query? And what is the maximum length of a query? Thank you in advance
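One way to shrink the dashboard XML, sketched here with hypothetical names: move the wall of like() conditions into a search macro (or, if the patterns are data-like, a lookup), so the saved query stays short:

```
# macros.conf (sketch; macro and field names are hypothetical)
[long_like_filter]
definition = like(uri, "%login%") OR like(uri, "%checkout%")
```

The dashboard search then becomes `... | where `long_like_filter``, which may also sidestep whatever size limit is producing the connection reset.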
Hi there, we are using ADQL to get transactions in series, e.g.:

SELECT series(eventTimestamp, '6m'), avg(responseTime) AS "Response Time (second)" FROM transactions WHERE application="xxx" AND transactionName IN ("transaction-name")

but we can't find anything. Actually, we can't find them even after removing the whole WHERE clause. Does anybody know how we can get the same data as Application -> Business Transaction -> Transaction Snapshots?
I want to set maxTotalDataSizeMB to 2000000 (~2 TB). Is there a more human-readable way of writing this? e.g. 2,000,000 or 2_000_000 or 2e6
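As far as I know, .conf values are parsed as plain integers, so separators like 2,000,000 or 2_000_000 and scientific notation like 2e6 are not accepted; the usual workaround is a comment next to the setting. A sketch:

```
# indexes.conf: maxTotalDataSizeMB = 2000000 is roughly 2 TB
[my_index]
maxTotalDataSizeMB = 2000000
```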