All Topics


One search in a dashboard ends with "Waiting for data" for 3 of about 300 organisations. The organisation name is part of the URL. The search completes correctly for most of the organisations. After refreshing the search (in the dashboard) or clicking on the magnifying glass, the result of the search is shown. Job inspection gives no error. Any idea what could be the reason that the result is not shown in the dashboard directly?

Hello Splunkers! I am collecting logs from multiple devices; a couple of them have different timezones, so I followed the instructions listed in the following link: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Applytimezoneoffsetstotimestamps#:~:text=To%20determine%20the%20time%20zone,TZ%20attribute%20set%20in%20props.

What I did was:

  [source::cisco]
  TZ = US/Eastern

The timestamp after this change still looks wrong: instead of becoming 10:00 AM it shows 7:00 AM +3:00. How can this be changed?

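Two things worth checking, not stated in the original post: TZ in props.conf is applied at index time, so it only affects events indexed after the change, and it is ignored when the raw timestamp already carries an explicit offset (which the "+3:00" suggests). A minimal sketch for inspecting how recent events were parsed, assuming they can be found with source=cisco* (a placeholder filter):

  source=cisco* earliest=-15m
  | eval parsed_time=strftime(_time, "%Y-%m-%d %H:%M:%S %z")
  | table _time parsed_time _raw

parsed_time renders each event's _time together with its UTC offset (as seen under your user timezone setting), which makes it easier to tell whether the raw timestamp or the TZ setting is driving the shift.
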
How can I extract all the data listed inside a dashboard using the Python SDK?

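A minimal sketch of one possible approach with the splunk-sdk Python package: run each search that backs the dashboard and export its results. The host, credentials, and query string below are placeholders, and the dashboard's searches would first have to be read out of its XML (for example from the data/ui/views endpoint).

  # pip install splunk-sdk
  import splunklib.client as client
  import splunklib.results as results

  # Placeholder connection details -- replace with your own.
  service = client.connect(
      host="localhost", port=8089,
      username="admin", password="changeme",
  )

  # A search copied out of the dashboard's XML (placeholder query).
  query = "search index=_internal | stats count by sourcetype"

  # Run the search to completion, then stream the results back as JSON rows.
  job = service.jobs.create(query, exec_mode="blocking")
  for row in results.JSONResultsReader(job.results(output_mode="json")):
      if isinstance(row, dict):  # skip diagnostic Message objects
          print(row)

Repeating this for every <search> element in the dashboard (and substituting any tokens) would give you all of the data the panels display.
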
Hi, I have created a table with host and grouped IP addresses; each host can have both a public and a private IP address. So my table looks like this:

  Host      IP            id
  Host A    10.1.1.1      21
            172.1.1.1

I have an IP range to identify the public IPs, and I need to create another field: if the range matches, the result should be "Yes", otherwise "No". I have used this query for the field:

  | eval "internet facing"=case(cidrmatch("172.1.1.0/24", IP), "Yes", 1=1, "No")

but this eval only works when the field has a single IP; on my grouped IP field it is not working. Please assist with this. Thank you.

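A minimal sketch of one way to handle this when the grouped IPs live in a single multivalue field named IP (an assumption based on the table above): filter the values through cidrmatch and check whether anything matched.

  | eval matched=mvfilter(cidrmatch("172.1.1.0/24", IP))
  | eval "internet facing"=if(coalesce(mvcount(matched), 0) > 0, "Yes", "No")
  | fields - matched

mvfilter keeps only the values of IP that fall inside the range, and mvcount/coalesce turns "at least one value matched" into the Yes/No flag even when nothing matches.
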
How do I change the colors of the destination nodes in the Network Diagram Viz app, especially if they are not present in the source column? For example, if I try | eval color=case(ip_dst="some_ip", "blue"), nothing happens.

Hi Splunk Experts, I'm trying to list all the events with the same timestamp and capture only the required lines. But I'm not getting the expected results; it seems like there is no "\n" in the aggregated event even though it breaks into new lines. Kindly shed some light. Thanks in advance!!

I have events something like below, after aggregating them by _time:

  Line1 blablabla
  Line2 blablabla
  <Interested line1>
  <Interested line2>
  <Interested line3>
  <Ends Here>
  Unwanted Line blablabla

Query used:

  index=xxx | reverse | stats list(_raw) as raw by _time | rex field=raw "(?<Events>(\<Interested.*)((\n.*)?)+\<Ends Here\>)"

Result of the above query:

  <Interested line1>

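Not part of the original post, but a likely explanation: stats list(_raw) builds a multivalue field in which each original event is a separate value, so there are no embedded newlines for the \n in the pattern to match. A minimal sketch of one workaround, assuming the same index and markers as above: join the values into a single string first, then use a dot-matches-all regex.

  index=xxx
  | stats list(_raw) as raw by _time
  | eval raw=mvjoin(raw, " ")
  | rex field=raw "(?s)(?<Events><Interested.*?<Ends Here>)"

mvjoin collapses the multivalue field into one string (any delimiter works), and (?s) lets . cross any remaining line breaks so the capture can span from <Interested to <Ends Here>.
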
Hi all, I created a lookup 6 months ago; now I have hundreds of lookups and I forgot its name. I am trying to find out which lookup a particular IP address is in, but I couldn't find a way to do this. Any help would be appreciated!

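A minimal sketch of one brute-force approach, not from the original post: list the lookup table files over REST, then open each one with map and inputlookup and keep only the files containing the address. The column name ip and the address 10.1.2.3 are placeholders (repeat or adjust the inner search for other column names), and map silently drops lookups its inner search cannot read.

  | rest /services/data/lookup-table-files splunk_server=local count=0
  | fields title
  | map search="| inputlookup $title$ | eval lookup_name=\"$title$\" | search ip=10.1.2.3 | stats count AS matching_rows by lookup_name" maxsearches=200

Any lookup that survives the pipeline contains at least one row whose ip column holds that address.
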
Hello, I'm trying to figure out how I can use a kind of if...else condition in my Splunk query. I've set up two metrics, which are sending data to Splunk. Each metric has a different index. For example, for metric A the index is "index=aData" and for metric B the index is "index=bData". Currently in Splunk I'm seeing duplicate data because both metrics are sending the same values. What I'm trying to achieve is:

1. First look for data coming from "index=aData".
2. If there is data from index "aData", show those results.
3. Else, check the data from "bData" (not looking for an "OR" condition).

Results should show the data from only one index, to avoid duplication.

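A minimal sketch of one way to express this preference without a true if/else, assuming both indexes can be searched together: count how many matching events came from aData, keep aData events whenever any exist, and fall back to bData only when none do.

  index=aData OR index=bData <your filters>
  | eventstats count(eval(index="aData")) as a_count
  | where (a_count > 0 AND index="aData") OR (a_count = 0 AND index="bData")
  | fields - a_count

eventstats attaches a_count to every event, so the where clause acts as the "if there is aData, use it; else use bData" rule in a single search.
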
Splunk dashboard: We have a dropdown with 2 possible values, option1 and option2. Based on what the user selects, either ("A" OR "B") for option1 or ("X" OR "Y") for option2 gets added to both the base query and the dependent query.

1. If the user selects "option1", the query is:

  <search id="base_query">
    <query>index=logs sourcetype=ci "Shipping Finished" ("A" OR "B") ...</query>
  <search base="base_query">
    <query> | join some_field [ search index=logs sourcetype=ci | search ("A" OR "B") AND "Received complete status"

2. If the user selects "option2", the query is:

  <search id="base_query">
    <query>index=logs sourcetype=ci "Shipping Finished" ("X" OR "Y") ...</query>
  <search base="base_query">
    <query> | join some_field [ search index=logs sourcetype=ci | search ("X" OR "Y") AND "Received complete status"

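A minimal sketch of one way to avoid duplicating the two query variants, assuming SimpleXML: let the dropdown's choice value carry the whole clause in a token (filter_clause is a made-up token name) and reference that token in both searches.

  <fieldset submitButton="false">
    <input type="dropdown" token="filter_clause">
      <label>Option</label>
      <choice value="(&quot;A&quot; OR &quot;B&quot;)">option1</choice>
      <choice value="(&quot;X&quot; OR &quot;Y&quot;)">option2</choice>
      <default>(&quot;A&quot; OR &quot;B&quot;)</default>
    </input>
  </fieldset>

  <search id="base_query">
    <query>index=logs sourcetype=ci "Shipping Finished" $filter_clause$</query>
  </search>
  <search base="base_query">
    <query>| join some_field [ search index=logs sourcetype=ci | search $filter_clause$ "Received complete status" ]</query>
  </search>

When the selection changes, $filter_clause$ is substituted into both the base search and the post-process search, so only one copy of each query has to be maintained.
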
Hello, I am attempting to install Splunk Stream but am running into issues after installing the necessary packages. I am installing the Stream app on a standalone Splunk instance on a VM and have tried Ubuntu 22.04, Windows 10, and Windows Server 2019, both on-premise and in AWS/Azure, and I run into the exact same issue. After installing the Splunk App for Stream, the Wire Data add-on, and the Stream Forwarder add-on as instructed in the link below, when I check 'Collect data from this machine using Wire Data input (Splunk_TA_stream)', I get the following error: Failed to detect Splunk_TA_stream status.

https://docs.splunk.com/Documentation/StreamApp/7.4.0/DeployStreamApp/InstallSplunkAppforStreaminasingleinstance#:~:text=of%20Splunk%20Enterprise.-,Set%20up%20data%20collection%20on%20the%20local%20machine,-Select%20the%20Collect

Pressing 'Redetect' does not help, and running the permissions.sh script does not change anything. The Splunk instance itself is a fresh install (no additional configuration) and no other apps besides Stream and its required add-ons have been installed. Can someone please help explain this error, why it happens regardless of which OS I use, and whether there are additional steps I must complete? Any guidance is appreciated. The workflow I have followed is:

1. Deploy the VM (on-prem or cloud; I have used both Ubuntu 22.04 and Windows)
2. Install Splunk Enterprise on the new VM
3. Install the Splunk App for Stream, the Wire Data add-on, and the Stream Forwarder add-on
4. Restart the Splunk instance

Please suggest an application available in the Splunk store (Find More Apps), preferably free, that can authenticate to an API using a bearer token. I installed the "REST API Modular Input" app, but the activation key needs to be purchased.

Is it possible to create notable events in Splunk Cloud, or is that native only to Enterprise Security? The detection rule below sets actions = risk,notable and assigns some parameters on the notable event. Is it possible to implement this rule as-is, with the notable action, in Splunk Cloud, or is that only possible in Enterprise Security? I know the alert can be created in Splunk Cloud with its alerting feature, but I am wondering whether we need to modify the actions part of the detection rule if notable events do not exist in Splunk Cloud. Thank you.

  [Possible Remote Administration Tools Detected (via office365)]
  alert.severity = 3
  description = Remote administration tool is software that helps the administrator or attacker to receive full control of the targeted device.
  cron_schedule = 0 * * * *
  disabled = 1
  is_scheduled = 1
  is_visible = 1
  dispatch.earliest_time = -60m@m
  dispatch.latest_time = now
  search = index=* ((Operation="FileUploaded" OR Operation="FileAccessed" OR Operation="FileDownloaded")
  alert.suppress = 0
  alert.track = 1
  actions = risk,notable
  action.risk = 1
  action.risk.param._risk_object_type = user
  action.risk.param._risk_score = 75
  action.correlationsearch = 0
  action.correlationsearch.enabled = 1
  action.notable.param.rule_title = Possible Remote Administration Tools Detected (via office365)
  action.notable.param.rule_description = Remote administration tool is software that helps the administrator or attacker to receive full control of the targeted device.
  action.correlationsearch.label = Possible Remote Administration Tools Detected (via office365)
  action.correlationsearch.annotations = {"mitre_attack": ["T1204"]}

Is there an SBOM released for Splunk, and ideally for all the apps and add-ons on Splunkbase? We are looking to create an SBOM for a solution in which Splunk is a component, and as a result need an SBOM for Splunk itself. Any pointers are appreciated. https://www.splunk.com/en_us/blog/learn/sbom-software-bill-of-materials.html

Just installed Splunk App for Lookup File Editing 4.0.1 in Splunk Enterprise 9.0.5. The app loads after restart, but it gives "The lookup could not be loaded from the server" when I try to open an existing lookup; it gives the same error after I click "Save" when I create a new lookup. The file is created, but a corresponding lookup definition is not. How do I make the app work?

Following a suggestion in https://community.splunk.com/t5/All-Apps-and-Add-ons/Upgraded-Lookup-Editor-3-0-5-Errors-String-value-too-long-and/m-p/445645#M68591, I performed a search:

  index=_internal (sourcetype="lookup_editor_controller" OR sourcetype=lookup_editor_rest_handler OR sourcetype=lookup_backups_rest_handler) testedit

The only error entry reads:

  ERROR force lookup replication failed: user=admin, namespace=search, lookup_file=testedit, details=a bytes-like object is required, not 'str'
  Traceback (most recent call last):
    File "/opt/splunk/etc/apps/lookup_editor/bin/lookup_editor/__init__.py", line 419, in update
      self.force_lookup_replication(namespace, lookup_file, session_key)
    File "/opt/splunk/etc/apps/lookup_editor/bin/lookup_editor/__init__.py", line 295, in force_lookup_replication
      if 'No local ConfRepo registered' in content:
  TypeError: a bytes-like object is required, not 'str'

Before this error, there were two DEBUG entries and one INFO. In chronological order:

  DEBUG destination_lookup_full_path=/opt/splunk/etc/apps/search/lookups/testedit
  DEBUG Creating a new lookup file, user=nobody, namespace=search, lookup_file=testedit, path="/opt/splunk/var/run/splunk/lookup_tmp/lookup_gen_20230818_181212_7r7p4o8s.txt"
  INFO Lookup created successfully, user=admin, namespace=search, lookup_file=testedit, path="/opt/splunk/etc/apps/search/lookups/testedit"

After I manually define a lookup with this file, I am able to use it, but the editor still cannot open it.

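Not from the original post, but the traceback suggests the REST response body (content) arrives as bytes while the needle is a str, and in Python 3 testing a str against bytes raises exactly this TypeError. A small self-contained Python illustration of the mismatch and of the kind of decode that would sidestep it (the response body below is a stand-in, and patching the app's force_lookup_replication yourself is only a hypothetical local workaround until an app update addresses it):

  # Stand-in for the REST response body the app receives as bytes.
  content = b'{"messages": []}'

  # Decode before the membership test; 'needle' in b'...' raises
  # TypeError: a bytes-like object is required, not 'str'.
  if isinstance(content, bytes):
      content = content.decode("utf-8", errors="replace")

  if 'No local ConfRepo registered' in content:
      print("no ConfRepo registered")
  else:
      print("replication check can proceed")
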
I have an indexer RHEL7 server that is DEAD. I have no way of getting into it to run any commands. I was able to remove it from the indexer cluster using:

  splunk remove cluster-peers -peers <guid>

However, it still appears in the Monitoring Console as an unreachable instance. How can I fully remove it?

I have a Splunk container for development (Dev). I want to import a slice of data from one index of my production Splunk (Prod) into this container so I can write searches against that data exactly as it appears in Prod. Using Export on Prod and Import on Dev is not producing the desired outcome: importing everything as a single file under a single sourcetype indexes the container hostname as the host, not the host of the original data, and because the Prod index contains varying sourcetypes, the import also assigns only the sourcetype of the import file rather than the sourcetype of each event. I'm looking at possibly using the EventGen app but am not sure it will do what I'm trying to do. Is what I'm doing possible? I do not want the entire Prod index, and I do not want to rsync or otherwise go to the backend to move data.

EDIT: I modified the title; it seems I want the raw data and metadata to all come over in one package.

I'm trying to create an SPL search which will give me the results below: find all users who have visited "store.com", but if a user also visited nzcompany.com, don't display that user in the table (even though they did visit store.com).

  User   URL
  Brad   store.com
  Tom    store.com
  Bart   nzcompany.com
  Lisa   store.com
  Bart   store.com
  Tom    store.com
  Lisa   store.com
  Lisa   nzcompany.com
  Lisa   store.com

Expected results:

  Tom
  Brad

I tried this but it didn't work:

  index=network (url=store.com AND url!=nzcompany.com) | table user

Thanks

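A minimal sketch of one way to express "visited store.com but never nzcompany.com", assuming the field names user and url from the example: count each user's visits per site, then keep only users with zero nzcompany.com visits.

  index=network (url="store.com" OR url="nzcompany.com")
  | stats count(eval(url="store.com")) as store_visits
          count(eval(url="nzcompany.com")) as nz_visits
          by user
  | where store_visits > 0 AND nz_visits = 0
  | table user

The original attempt filters event by event, so a store.com event from Bart or Lisa still passes; aggregating by user first lets the nzcompany.com visits exclude the whole user.
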
Hi, I am building an alert in Splunk. I have a log with 6 different variables, but I am actually interested in only 4 of them (A, B, C and D). Those variables usually have a numeric value like 50, but the value can also be 'unknown'. This is a sample event:

  {
    responseStatus: 200,
    calculationBreakdown: {
      evaluation: {
        A: unknown
        B: unknown
        C: unknown
        D: unknown
        E: 50
        F: unknown
      }
    }
  }

I am trying to compute stats for the number of 'unknown' values for each variable and the total number of calls; then I can calculate the percentage of 'unknown' for each variable (treated as an error) and fire my alert based on those stats. So I tried a simple query:

  index=someIndex app=someApp event.responseStatus=200
  | stats count as total, sum(eval(if('event.calculationBreakdown.evaluation.A'==unknown, 1, 0))) as total_errors_for_A

I wanted to do the same for the errors for B, C and D, but this does not work at all: it just calculates the total for all the requests but 0 for total_errors_for_A, and I know there are some events with A = unknown in the time range, so they should be counted. When I change sum to count, it shows the same number in both columns, total and total_errors_for_A. I also tried different quotes for 'event.calculationBreakdown.evaluation.A' and unknown (single / double / no quotes), and also added spath 'event.calculationBreakdown.evaluation.A' before | stats, but that does not change anything. Is anyone able to help? I am pretty sure it is something super simple, but my mind goes blank. Thanks a million.

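A minimal sketch of one likely fix, assuming the field names from the sample event: in eval, unknown without quotes is read as a field name, so the comparison needs a double-quoted string literal ("unknown"), while the field itself keeps single quotes because of the dots in its name.

  index=someIndex app=someApp event.responseStatus=200
  | stats count as total
          sum(eval(if('event.calculationBreakdown.evaluation.A'=="unknown", 1, 0))) as total_errors_for_A
          sum(eval(if('event.calculationBreakdown.evaluation.B'=="unknown", 1, 0))) as total_errors_for_B
  | eval pct_errors_A=round(100 * total_errors_for_A / total, 2)

The same pattern extends to C and D. This would also explain the count behaviour you saw: if() always returns 1 or 0 (never null), so count() of it simply matches the total.
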
Hello, when I try to start the watchdog, it does not start and I get the following error. Because the watchdog does not start, DB replication does not work either:

  Sql Thread for relay log not running - replication error

I'm looking for a way to list all indexes available to each role in Splunk (including access inherited from other roles). This search almost does it:

  | rest /servicesNS/-/-/authorization/roles count=0 splunk_server=local
  | fields title,srchIndexesAllowed
  | rename srchIndexesAllowed as Indexes, title as Role
  | search Indexes=*

However, this does not account for inherited indexes. Listing the indexes available to a single role is fairly easy (but time consuming):

1. Under Settings -> Roles, select a role (or Edit)
2. Open the "Indexes" tab
3. Filter "Show Selected" from the far right column

Is there a way to get this list (for all roles) from SPL?

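A minimal sketch of one possible extension, assuming the roles endpoint also exposes an imported_srchIndexesAllowed field for indexes inherited from imported roles (worth verifying on your version): merge it with the role's own list.

  | rest /servicesNS/-/-/authorization/roles count=0 splunk_server=local
  | fields title srchIndexesAllowed imported_srchIndexesAllowed
  | eval Indexes=mvdedup(mvappend(srchIndexesAllowed, imported_srchIndexesAllowed))
  | rename title as Role
  | table Role Indexes

mvappend combines the directly granted and inherited index lists into one multivalue field, and mvdedup removes any duplicates; note that wildcard entries (such as *) are returned as-is rather than expanded into concrete index names.
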