All Topics
I have 3 standalone indexers and another 3 indexers in a cluster. We want to decommission the 3 standalone indexers, but first we have to move their data onto the cluster. I imagine the process would be something like: roll all hot buckets to warm, then rsync the warm and cold mounts/directories to a temp directory on one of the indexer cluster members (standalone 1 to cluster member 1, 2 to 2, 3 to 3).

But once we rsync the data over, how do I get the new indexer to recognize the old imported data? Is it as simple as merging the old imported data into the appropriate index directory on the new indexer? For example, copying the old wineventlog index into the same-named directory on the new indexer? Would that work, or is there more to it? Is there some kind of Splunk-native command to move all data from indexer A to indexer B? Is there a better (or correct) way to make the new indexer recognize the imported data? I appreciate any help! Thanks.
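For reference, a minimal sketch of the bucket-copy approach described above, assuming default index paths, Linux hosts, and placeholder index/host names; note that buckets copied onto a cluster peer this way are generally treated as standalone buckets (searchable but not replicated by the cluster), and bucket IDs must not collide with existing buckets in the target index:

# On the standalone indexer: roll hot buckets to warm, then stop Splunk
splunk _internal call /data/indexes/wineventlog/roll-hot-buckets -auth admin:changeme
splunk stop

# Copy warm and cold buckets to the matching cluster member
rsync -av $SPLUNK_DB/wineventlog/db/ peer1:/tmp/import/wineventlog/db/

# On the cluster member: check for collisions on the local-id part of the
# db_<newest>_<oldest>_<id> directory names (rename any clashing bucket),
# then move the buckets in and restart
splunk stop
mv /tmp/import/wineventlog/db/db_* $SPLUNK_DB/wineventlog/db/
splunk start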
The Java agent doesn't always understand some thread handoffs or exit calls. There are examples in the documentation for how to use the AppD API to handle this. However, I would like to know if it can be done in a vendor-neutral way. For example, with other APM agents it's sometimes possible to use OTel manual propagation and it "just works." Can this be done with the AppD Java agent? Thanks
I have data visualized as a table:

name   value   time
App1   123     5s
App2   0       2s
App3   111     10s

I know the drilldown option can be used to go to other pages and pass tokens to another dashboard. Currently I have set clicking a row to open the Value dashboard. However, is there any way I can drill down to multiple dashboards based on which column I clicked (e.g. clicking the value column goes to the Value dashboard, clicking the name column goes to the Name dashboard, etc.)?
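A sketch of a per-column drilldown in Simple XML, assuming hypothetical dashboard names (name_dashboard, value_dashboard) and that the table fields are literally name and value; <condition field="..."> selects on the clicked column, and $click.value2$ holds the clicked cell's value:

<table>
  ...
  <drilldown>
    <condition field="name">
      <link target="_blank">/app/search/name_dashboard?form.app_tok=$click.value2$</link>
    </condition>
    <condition field="value">
      <link target="_blank">/app/search/value_dashboard?form.value_tok=$click.value2$</link>
    </condition>
    <condition field="*"/>
  </drilldown>
</table>

The final wildcard condition acts as a no-op catch-all so clicks on other columns (such as time) do nothing.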
Firewall logs need some purification for threat monitoring; below are a couple of sample events. From the events below, where action=Accept AND service=23 along with protection_type=geo_protection, we need "protection_type=geo_protection" to be removed from _raw in an index-time extraction.

Current:

2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513220|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=000|time=1700513220|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|protection_type=geo_protection|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|protection_type=geo_protection|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|protection_type=geo_protection|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Denied|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|protection_type=geo_protection|proto=99|s_port=1234|service=67|src=111.11.1.111|src_country=Other

Expected:

2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513220|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=000|time=1700513220|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|---|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|---|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|---|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Denied|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|protection_type=geo_protection|proto=99|s_port=1234|service=67|src=111.11.1.111|src_country=Other

Thanks in advance!
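A sketch of one way to do this at index time with a SEDCMD in props.conf, assuming a hypothetical sourcetype name and that the props deploy to wherever parsing happens (indexers or heavy forwarders). The regex relies on action= appearing before protection_type= and service= appearing after it in the raw event, so only Accept/service=23 events are rewritten:

# props.conf (sourcetype name is a placeholder)
[checkpoint:firewall]
SEDCMD-drop_geo_protection = s/(action=Accept\|[^\n]*)\|protection_type=geo_protection\|([^\n]*\|service=23\|)/\1|---|\2/

The Denied event fails the action=Accept match and the service=67 event fails the trailing service=23 match, so both pass through unchanged, matching the Expected output above.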
Hi all!

What I thought was going to be a fairly simple panel on a dashboard has been giving me fits. We have a global time picker (Datepkr) for our dashboard, and based on other picker selections from that dashboard we would like to display a simple count of events in a timechart for the time window selected by the date picker, and for the same time window the week prior. So if someone selected events for the past 4 hours, we would get a line chart of events for the past four hours with a second line for the same four hours exactly one week prior. Same deal if someone selected events in the time range Wednesday, Oct-18 16:00 through Thursday, Oct-19 12:00: they would get events for that range plus a second line for events Wednesday, Oct-11 16:00 through Thursday, Oct-12 12:00. I think it would get a bit weird as you start selecting windows of time larger than one week, but that's OK; for the most part people will be using times less than one week.

I've run into two hurdles so far: one is how to get the second "-7d" time range to be created from the time picker, and the other, once the two searches can be made, is how to effectively merge them together. I saw a few posts mentioning makeresults or addinfo and info_min_time/info_max_time, but these don't seem to be resolving correctly (the way I was using them, at least), and setting last week's time in the body of the query seems wrong, or at least less useful than having it resolved somewhere it could be used on other panels. I tried to add two new tokens to set the past window, but because the time picker can produce times in varying formats this didn't seem to work. I tried different ways of converting to epoch time and back but didn't get anywhere with that either.

Timepicker config including the eval:

<input type="time" token="Datepkr">
  <label>Time Range Picker</label>
  <default>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </default>
  <change>
    <eval token="date_last_week.earliest">relative_time($Datepkr.earliest$, "-7d")</eval>
    <eval token="date_last_week.latest">relative_time($Datepkr.latest$, "-7d")</eval>
  </change>
</input>

I haven't been able to get as far as a search that produces the right results, but assuming I can, I'm not sure how to overlay the two time ranges on top of each other since they are different ranges. Wouldn't they display end to end? I'd like them to overlay. I saw the timewrap function, but given that timewrap requires a time field as well as a time-span for the chart, I don't think that would mesh with the time picker.

Maybe something like:

Search for stuff from -7d
| eval ReportKey="Last_Week"
| modify the "_time" field
| append [subsearch for stuff today | eval ReportKey="Today"]
| timechart it based on ReportKey

Thanks in advance for any help!
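A sketch along those lines, assuming a placeholder index: the appended subsearch uses the date_last_week tokens set by the <change> eval above and shifts last week's _time forward by 604800 seconds (7 days) so both series land on the same axis and overlay rather than displaying end to end. This assumes the tokens resolve to values that earliest/latest accept; relative presets like -15m may need converting to epoch first, which is the formatting problem noted above:

index=my_index earliest=$Datepkr.earliest$ latest=$Datepkr.latest$
| eval ReportKey="This week"
| append
    [ search index=my_index earliest=$date_last_week.earliest$ latest=$date_last_week.latest$
      | eval ReportKey="Last week"
      | eval _time=_time + 604800 ]
| timechart count by ReportKey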
I have created a dashboard in Dashboard Studio. I have a table visualization; see my code below. The "Time" column auto-sets my | bin to one minute. When I update my timepicker to, say, the last 7 days, it still shows the time bin as one minute. How can I dynamically change the bin to best fit my timepicker selection?

| search cat IN ($t_endpoint$) AND Car IN ($t_car$)
| eval Time=strftime(_time,"%Y-%m-%d-%I:%M %p")
| stats limit=15 sum(Numbercat) as Numbercat, avg(catTime) as AvgcatSecs by Time, Car, cat
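One possible approach, sketched under the assumption that the fields and tokens above are kept as-is: bin cannot take its span from a field, but you can compute a bucket size in seconds from the selected time range with addinfo and do the bucketing arithmetic in eval (the thresholds here are arbitrary examples):

| search cat IN ($t_endpoint$) AND Car IN ($t_car$)
| addinfo
| eval spansecs=case(info_max_time - info_min_time <= 14400, 60,
                     info_max_time - info_min_time <= 604800, 3600,
                     true(), 86400)
| eval Time=strftime(floor(_time / spansecs) * spansecs, "%Y-%m-%d %I:%M %p")
| stats sum(Numbercat) as Numbercat, avg(catTime) as AvgcatSecs by Time, Car, cat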
I get the following error when I try to add a receiver with port 9997 or 514:

The following error was reported: SyntaxError: Unexpected token '<', " <p class=""... is not valid JSON.

I get the same error no matter what port I try to enter. This is a new installation, and this is the first thing I tried to do. I am somewhat of a novice with Splunk.
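In case it helps while the UI error gets sorted out, receiving can also be enabled from the command line or directly in inputs.conf; a sketch, assuming a default on-prem install:

# CLI
splunk enable listen 9997

# or equivalently in $SPLUNK_HOME/etc/system/local/inputs.conf, then restart
[splunktcp://9997]
disabled = 0

Note that port 514 for syslog would normally be a [udp://514] or [tcp://514] input rather than a splunktcp receiver, and binding ports below 1024 may require elevated privileges on Linux.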
Our networking team needs to get ASNs from public IP addresses. We found the TA-asngen add-on. I put it through splunk-appinspect and fixed the failures, added a MaxMind license key in default/asngen.conf, and installed it in our Splunk Cloud instance. When I try to run the asngen command, it gives this error message:

Exception at "/opt/splunk/etc/apps/TA-asngen/bin/asngen.py", line 55 : maxmind license_key is required

Just wondering if anyone has tried the TA in the cloud. Any thoughts would be much appreciated.
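In case it helps, a sketch of where the key may need to live. Assuming the add-on reads asngen.conf through the standard conf layering, settings placed in default/ can be overridden or dropped depending on how the app is repackaged for Cloud, so putting the key in local/ is worth trying (the stanza and key names below are assumptions based on the add-on's default file):

# TA-asngen/local/asngen.conf (stanza/key names are assumptions)
[asngen]
license_key = <your MaxMind license key>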
Hi, I have a union'ed search where I want to link different events based on fields that have matching values. My search looks like this:

| union
    [search message=* | spath Field1 | spath Field2]
    [search city=* | spath FieldA | spath FieldB]
| table Field1 Field2 FieldA FieldB

My current output looks like this:

Field1   Field2   FieldA   FieldB
John     Blue
                  Blue     Ohio
                  Yellow   Wyoming

However, I need a way to link Field1 to FieldB if Field2=FieldA, where the output would look something like this:

Field1   Field2   FieldA   FieldB
John     Blue     Blue     Ohio
                  Yellow   Wyoming

If there is a way to do something like this, please let me know, even if I need to create new fields. The excess FieldA and FieldB rows are unimportant if there is no matching Field2. Please help!
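A sketch of one way to stitch the rows together, assuming Field2 and FieldA are the matching pair: derive a shared key with coalesce and roll the rows up with stats:

| union
    [ search message=* | spath Field1 | spath Field2 ]
    [ search city=* | spath FieldA | spath FieldB ]
| eval join_key=coalesce(Field2, FieldA)
| stats values(Field1) as Field1, values(Field2) as Field2,
        values(FieldA) as FieldA, values(FieldB) as FieldB by join_key
| table Field1 Field2 FieldA FieldB

Rows with no matching Field2 (e.g. Yellow/Wyoming) simply come through with Field1 and Field2 empty.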
Hi, I have an image stored in SharePoint and I am trying to show it in a dashboard. Since it is Splunk Cloud, I do not have access to place the image under static/app on the search heads. Below is the code I am using in the dashboard, but the image isn't coming up. I did check the URL and it loads the image.

<html>
  <centre>
    <img style="padding-top:60px" height="92" href="https://sharepoint.com/:i:/r/sites/Shared%20Documents/Pictures/Untitled%20picture.png?csf=1&amp;web=1&amp;e=CSz2lp" width="272" alt="Terraform "></img>
  </centre>
</html>
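For what it's worth, two details in the snippet look off independent of any SharePoint permissions: an img tag takes src rather than href, and the element is <center>, not <centre>. A corrected sketch with the same URL (note SharePoint may still refuse to serve the raw image to a browser session that isn't authenticated to it):

<html>
  <center>
    <img style="padding-top:60px" height="92" width="272"
         src="https://sharepoint.com/:i:/r/sites/Shared%20Documents/Pictures/Untitled%20picture.png?csf=1&amp;web=1&amp;e=CSz2lp"
         alt="Terraform"/>
  </center>
</html>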
Hello, are data transfer costs built into the cost model for Splunk Archive? The customer is concerned about surprises (in the form of a bill, or data caps) associated with freezing their data into the Splunk-managed archive solution.
I want to write a Splunk query that runs over the same time window but on a different date than the one selected in the datetime picker. For example, if I select the range 8 Aug 10:00am to 8 Aug 10:15am in the date picker, my query should give me results for the window 1 Aug 10:00am to 1 Aug 10:15am.
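A sketch of one way to shift the picked window back seven days, assuming a placeholder index: the subsearch reads the selected range with addinfo and hands shifted earliest/latest values to the outer search via return:

index=my_index
    [ | makeresults
      | addinfo
      | eval earliest=relative_time(info_min_time, "-7d"),
             latest=relative_time(info_max_time, "-7d")
      | return earliest latest ]
| timechart count

Adjust the "-7d" offset to whatever date shift is needed.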
Hi folks,

I have been trying to create a query that lists the index name and earliest event for indexes that started getting events only during the selected time range. First I populate the list of indexes using a query like so:

index=_internal source=/opt/splunk/var/log/splunk/cloud_monitoring_console.log* TERM(logResults:splunk-ingestion)
| rename data.* as *
| fields idx

I want to find out which of the indexes in this list started to index events for the first time only in, say, the last month. I tried joining this query over idx, where tstats gives me the earliest event timestamp in the last 6 months (a good approximation of whether that index ever got data before the last month):

index=_internal source=/opt/splunk/var/log/splunk/cloud_monitoring_console.log* TERM(logResults:splunk-ingestion)
| rename data.* as *
| fields idx
| rename idx as index
| join index
    [ | tstats earliest(_time) as earliest_event where earliest=-6mon latest=now index=* by index
      | table index earliest_event ]

But this only gives me correct results when I specify an index name in the base query; for some reason, it doesn't give proper results for all indexes. I tried the map command as well, passing the index dynamically, but the performance of that query isn't ideal as there are hundreds of indexes. I also tried other commands like append, but none gave the expected outcome. I think there is an obvious solution here that's somehow eluding me. Appreciate any help with this.
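In case it's useful, a sketch that sidesteps the join entirely (join subsearches are subject to result-count and runtime limits, which is one plausible reason the version above returns partial results): let tstats compute the first event per index over the whole 6-month window, then keep only the indexes whose first event falls inside the last month:

| tstats earliest(_time) as earliest_event where earliest=-6mon latest=now index=* by index
| where earliest_event >= relative_time(now(), "-1mon")
| convert ctime(earliest_event)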
This is an example of an event for EventCode=4726. As you can see, there are two Account Name fields, which the Splunk app parses as ... two account names.

11/19/2023 01:00:38 PM
LogName=Security
EventCode=4726
EventType=0
ComputerName=dc.acme.com
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=1539804373
Keywords=Audit Success
TaskCategory=User Account Management
OpCode=Info
Message=A user account was deleted.

Subject:
    Security ID:    Acme\ScriptRobot
    Account Name:   ScriptRobot
    Account Domain: Acme
    Logon ID:       0x997B8B20

Target Account:
    Security ID:    S-1-5-21-329068152-1767777339-1801674531-65826
    Account Name:   aml
    Account Domain: Acme

Additional Information:
    Privileges -

I want to search for all events with Subject Account Name = ScriptRobot and then list all Target Account Account Names. Knowing that multiline regex can be a bit cumbersome, I tried the following search string, but it does not work:

index="wineventlog" EventCode=4726
| rex "Subject Account Name:\s+Account Name:\s+(?<SubjectAccount>[^\s]+).*\s+Target Account:\s+Account Name:\s+(?<TargetAccount>[^\s]+)"
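A sketch of a rex that follows the actual field layout above; (?s) lets .*? cross newlines, and each Account Name is anchored to its own Security ID line so the two captures can't collide (this assumes the account names are single tokens without spaces):

index="wineventlog" EventCode=4726
| rex "(?s)Subject:\s+Security ID:\s+\S+\s+Account Name:\s+(?<SubjectAccount>\S+).*?Target Account:\s+Security ID:\s+\S+\s+Account Name:\s+(?<TargetAccount>\S+)"
| search SubjectAccount="ScriptRobot"
| stats count by TargetAccount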
I am wondering if there's a way to use a dropdown menu and tokens to display two different results. I am trying to have the dropdown menu offer static options of "read" and "write". Read would display this search:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| spath output=values0 path=values{0}
| spath output=dsnames0 path=dsnames{0}
| stats min(values0) as min max(values0) as max avg(values0) as avg by dsnames0
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)

Write would display this search:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| spath output=values1 path=values{1}
| spath output=dsnames1 path=dsnames{1}
| stats min(values1) as min max(values1) as max avg(values1) as avg by dsnames1
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)

As you can see, the only change between the searches is the element index in the multivalue field. If there is a way to append the searches and show them together, that would be helpful as well.
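A sketch of how a single search could serve both choices, assuming a dropdown token named rw whose values are the multivalue indexes 0 (read) and 1 (write); the token is substituted straight into the spath paths:

<input type="dropdown" token="rw">
  <label>Direction</label>
  <choice value="0">read</choice>
  <choice value="1">write</choice>
  <default>0</default>
</input>

index="collectd_test" plugin=disk type=disk_octets plugin_instance=dm-0
| spath output=vals path=values{$rw$}
| spath output=names path=dsnames{$rw$}
| stats min(vals) as min max(vals) as max avg(vals) as avg by names
| eval min=round(min, 2), max=round(max, 2), avg=round(avg, 2)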
Hi, can you please let me know how to frame a Splunk query that compares a field from a search with a field from a lookup and finds the unmatched ones from the lookup table?
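A sketch of the usual pattern, with placeholder names (my_lookup.csv, my_field, my_index): start from the lookup and filter out every value the search does find, leaving only the unmatched lookup rows. This assumes the field is named the same in both places; rename inside the subsearch if it isn't:

| inputlookup my_lookup.csv
| search NOT
    [ search index=my_index
      | dedup my_field
      | fields my_field ]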
Hi All,

I am trying to get the top n users who made calls to some APIs over a span of 5 minutes. For example, with the query below I can see a chart of calls over time in 5-minute spans:

timechart span=5min count(action) by applicationname

Now I need to select the top n users (applicationname) that had a high number of calls within only a single 5-minute span, i.e. the users with sudden spikes.
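A sketch of one way to surface the spiky callers, assuming a placeholder base search and the field names above: count calls per application per 5-minute bucket, compare each bucket to that application's own average, and keep the outliers:

index=my_index
| bin _time span=5min
| stats count as calls by _time, applicationname
| eventstats avg(calls) as avg_calls, stdev(calls) as sd_calls by applicationname
| where calls > avg_calls + 2 * sd_calls
| sort - calls
| head 10

The "2 * sd_calls" threshold is an arbitrary example; tune it to how sharp a spike should count.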
Hi, we have the following error on one of our Splunk instances:

Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

and the following warning exists in the licensing section for the "auto_generated_pool_download-trial" pool:

This pool has exceeded its configured poolsize=524288000 bytes. A warning has been recorded for all members

However, the Splunk instance in question is not a trial version; it's an Enterprise version.
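A couple of checks that may help confirm which license group the instance is actually running in, given that the warning references the download-trial pool; the edit command is a sketch that assumes an Enterprise license file has already been installed:

# Show installed licenses and license groups
splunk list licenses
splunk list licenser-groups

# If the instance is still on the Trial group, switch it to Enterprise (then restart)
splunk edit licenser-groups Enterprise -is_active 1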
Hi, I found 2 previous posts about cases like this (https://community.splunk.com/t5/Dashboards-Visualizations/How-to-create-multiple-tabs-in-splunk-dashboard-studio/td-p/598020). I need to convert my dashboard from Classic to Dashboard Studio (because customers weren't satisfied with the filters in the Classic version), and I need to add tabs to my dashboard because I can't fit all the charts on one page (too many charts and different topics). How can I implement tabs, or is there a workaround? I don't want to create each tab as a separate dashboard. Thanks, Maayan
Hi, I need to combine two queries so that they appear as different fields in one visualization: one will be the error transactions and one will be the success transactions.

index=sso Appid="APP-49" PROD ("Util.validateAuth" AND "METHOD_ENTRY")   - ERROR

index=sso Appid="APP-49" PROD ("RestTorHandler : hleError :" OR "java.net.SocketException: Connection reset]" OR "Error in processor call." OR level="error" NOT "resubmit the request")   - SUCCESS

I need to combine both queries and provide the count for errors and the count for successes, but with the combined query below the error count does not match, because the success flag is just level!=ERROR:

index=ss Appid="APP-49" PROD ("Util.validateAuth" AND "METHOD_ENTRY") OR index=sso ("RestTorHandler : hleError :" OR "java.net.SocketException: Connection reset]" OR "Error in processor call." OR level="error" NOT "resubmit the request")
| rex field=_raw " (?<service_name>\w+)-prod"
| eval err_flag = if(environment="nonprod", 1,0)
| eval success_flag = if(level!="ERROR", 1,0)
| stats sum(err_flag) as total_errors, sum(success_flag) as total_successes by service_name

Please help; it would be great.
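A sketch of one way to classify each event explicitly instead of inferring success from level, keeping the post's labeling (the validateAuth/METHOD_ENTRY query is marked as the ERROR class above; swap the names if the labels are reversed). searchmatch() lets the flag mirror the original query directly:

index=sso Appid="APP-49" PROD (("Util.validateAuth" AND "METHOD_ENTRY") OR "RestTorHandler : hleError :" OR "java.net.SocketException: Connection reset]" OR "Error in processor call." OR (level="error" NOT "resubmit the request"))
| rex field=_raw " (?<service_name>\w+)-prod"
| eval bucket=if(searchmatch("Util.validateAuth METHOD_ENTRY"), "total_errors", "total_successes")
| chart count over service_name by bucket

This treats every event that isn't in the first class as belonging to the second; if events can match neither, add a third bucket in the eval.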