Activity Feed
- Posted Re: Search generates this error - Regex: regular expression is too large on Splunk Search. 10-03-2024 02:56 PM
- Posted Re: Search generates this error - Regex: regular expression is too large on Splunk Search. 10-03-2024 02:54 PM
- Karma Re: Search generates this error - Regex: regular expression is too large for sainag_splunk. 10-03-2024 02:54 PM
- Posted Re: Search generates this error - Regex: regular expression is too large on Splunk Search. 10-03-2024 02:45 PM
- Posted Re: Search generates this error - Regex: regular expression is too large on Splunk Search. 10-03-2024 10:16 AM
- Posted Search generates this error - Regex: regular expression is too large on Splunk Search. 10-03-2024 09:22 AM
- Posted Re: Dashboard Studio count input on Dashboards & Visualizations. 09-06-2024 10:00 AM
- Posted Re: Dashboard Studio count input on Dashboards & Visualizations. 09-05-2024 02:33 PM
- Posted Dashboard Studio count input on Dashboards & Visualizations. 09-05-2024 01:10 PM
- Posted Re: In Dashboard Studio using count of results in a different section on Dashboards & Visualizations. 09-05-2024 11:52 AM
- Karma Re: In Dashboard Studio using count of results in a different section for ITWhisperer. 09-05-2024 11:52 AM
- Posted In Dashboard Studio using count of results in a different section on Dashboards & Visualizations. 09-05-2024 10:05 AM
- Posted Re: Adding asterisk to host list on Splunk Search. 08-28-2024 01:16 PM
- Posted Re: Handling nulls in a string on Splunk Search. 08-28-2024 12:39 PM
- Posted Adding asterisk to host list on Splunk Search. 08-28-2024 12:32 PM
- Posted Re: Handling nulls in a string on Splunk Search. 08-21-2024 09:30 AM
- Posted Handling nulls in a string on Splunk Search. 08-20-2024 02:39 PM
- Posted Re: Line breaking odd issue on Splunk Search. 07-31-2024 01:50 PM
- Karma Re: Line breaking odd issue for PickleRick. 07-31-2024 01:50 PM
- Karma Re: Line breaking odd issue for yuanliu. 07-31-2024 01:50 PM
10-03-2024
02:56 PM
Thanks for the assistance @sainag_splunk. I didn't know about some of the btool options. I normally do btool --debug [inputs|props|transforms] list <stanza>.
10-03-2024
02:54 PM
The solution was filtering what was returned. The list of users reporting up went from 1,139 down to 233, and with 233 entries the search didn't error.
10-03-2024
02:45 PM
@sainag_splunk I didn't get any results back from the searches. This isn't surprising, since the information is a CSV file ingested by Splunk for reference. We don't do any modifications of the data.
10-03-2024
10:16 AM
@sainag_splunk The command doesn't return anything. Is there supposed to be an index or sourcetype in the command?
10-03-2024
09:22 AM
This is the search, with some anonymization.

index=index_1 sourcetype=sourcetype_1 field_1 IN (
    [ search index=index_2 field_2 IN (
        [ search index=index_2 field_2=abcdefg
        | fields field_3
        | mvcombine field_3 delim=" "
        | nomv field_3
        | dedup field_3
        | sort field_3
        | return $field_3])
    | fields field_3
    | sort field_3
    | mvcombine field_3 delim=" "
    | nomv field_3])

The deepest subsearch returns a list of managers that report to a director, 10 names. The outer subsearch returns a list of users who report to those managers, 1,137 names. If I run the search like this, I get output.

index=index_1 sourcetype=sourcetype_1 field_1 IN (1137 entries)

I can't find a reason why the first search returns 'Regex: regular expression is too large', since there is no command that uses regex. I can run each subsearch on its own without any issues, and I can't find anything in the _internal index. Any thoughts on why this is happening, or a better search? TIA, Joe
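One approach that is not from this thread but is often suggested for oversized subsearch expansions is to stage the user list in a lookup instead of expanding 1,000+ values inside IN(). A rough sketch using the same placeholder index and field names as above, plus a hypothetical lookup file called reporting_chain.csv — treat it as an illustration, not a tested fix.

Build the lookup (ad hoc or on a schedule), reusing the original inner subsearch:

index=index_2 field_2 IN (
    [ search index=index_2 field_2=abcdefg
    | fields field_3
    | mvcombine field_3 delim=" "
    | nomv field_3
    | dedup field_3
    | return $field_3])
| fields field_3
| dedup field_3
| outputlookup reporting_chain.csv

Then filter the main search against the lookup instead of a huge IN() list:

index=index_1 sourcetype=sourcetype_1
| lookup reporting_chain.csv field_3 AS field_1 OUTPUT field_3 AS in_chain
| where isnotnull(in_chain)

This trades the single very large generated search expression (the likely source of the regex-size error) for a lookup match after event retrieval, which stays within expression limits at the cost of retrieving more events up front.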
09-06-2024
10:00 AM
This works.
| makeresults
| fields - _time
| eval hosts="$servers_entered$"
| makemv delim="," hosts
| eval count=mvcount(hosts)
| table count
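A small variation on the above, offered only as a sketch (not from the thread): collapsing the makemv step into a single eval and returning 0 when the token is empty, using the same $servers_entered$ token.

| makeresults
| fields - _time
| eval hosts="$servers_entered$"
| eval count=if(len(trim(hosts))==0, 0, mvcount(split(hosts, ",")))
| table count

split() counts entries regardless of spaces around the commas, and the len(trim(...)) guard covers the case where nothing has been entered yet.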
09-05-2024
02:33 PM
@ITWhisperer Thanks, but those didn't work. I tried both of these.

| makeresults
| fields - _time
| eval count=mvcount($servers_entered$)

mvcount($servers_entered$)

The first errors. The second returns 0.
09-05-2024
01:10 PM
I'm working with Dashboard Studio for the first time and I've got another question. In the input on the dashboard, I set this token, $servers_entered$. I thought I had a solution for counting how many items are in $servers_entered$, but I found a case that failed. This is what $servers_entered$ looks like.

host_1, host_2, host_3, host_4, ..., host_n

What I need is a way of counting how many entries are in $servers_entered$. So far the commands I've tried have failed. What would work? TIA, Joe
Labels: Dashboard Studio, token
09-05-2024
11:52 AM
Thanks @ITWhisperer. That is the solution.
09-05-2024
10:05 AM
I'm working with Dashboard Studio for the first time and I've got a question. Originally I created a table search that returns data depending on what is in the $servers_entered$ field. That works. I have been asked to add two single value fields. The first is showing the number of servers in the $servers_entered$ field and that works. The second is showing the number of servers in the table search. There should be a way of linking that information, but I can't figure out how. I could run the search again, but that is rather inefficient. How do you tie the search result count from a table search to a single value field? TIA, Joe
Labels: Dashboard Studio, single value, table, token
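The accepted answer isn't quoted in this feed, but in general Dashboard Studio can feed a single value from a table's search through a chained data source, so the base search runs once and the single value only appends an aggregation. A rough sketch of the two pieces of SPL involved, with placeholder names:

Base search (drives the table):

index=os_* host IN ($servers_entered$)
| dedup host
| table host, index, sourcetype

Chained search (drives the single value, defined in the dashboard as an extension of the base rather than as an independent search):

| stats count

Because the second query extends the first, the table search is not executed a second time just to get its row count.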
08-28-2024
01:16 PM
Thanks @PickleRick for answering. This is what I found works.

index=os_* (`wineventlog_security` OR sourcetype=linux_secure)
[| tstats count WHERE index=os_* (source=* OR sourcetype=*) host IN ( $servers_entered$ ) by host
| dedup host
| eval host=host+"*"
| table host]
| dedup host
| eval sourcetype=if((sourcetype == "linux_secure"),sourcetype,source)
| fillnull value=""
| table host, index, sourcetype, _raw
08-28-2024
12:39 PM
@bowesmana, @gcusello, and @yuanliu thanks for the responses. This has been shelved due to funding issues. If it gets funded, we will go back to the vendor and see if they can add something that will say this is new or timestamp it so we can keep track that way.
08-28-2024
12:32 PM
I'm working on a dashboard in which the user enters a list of hosts. The issue I'm running into is that they must add an asterisk to the host name or it isn't found in the search. This is what the SPL looks like.

index=os_* (`wineventlog_security` OR sourcetype=linux_secure) host IN ( host1*, host2*, host3*, host4*, host5*, host6*, host7*, host8* ) earliest=-7d@d
| dedup host
| eval sourcetype=if(sourcetype = "linux_secure", sourcetype, source)
| fillnull value=""
| table host, index, sourcetype, _raw

If there is no * then there are no results. What I would like is for the users to be able to enter a host name or an FQDN, in upper or lower case, and have the SPL change it to lower case, remove any FQDN parts, add the *, and then search. So far I haven't come up with SPL that works. Any thoughts? TIA, Joe
Tags: spl
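No accepted answer is quoted in this feed, but as an illustration of the normalization described in the question above, eval's mvmap can lower-case each entry, strip any FQDN suffix, and append the wildcard before the list is used. The token is the one from the question; this is a sketch, not a tested solution, and mvmap needs a reasonably recent Splunk version.

| makeresults
| eval hosts="$servers_entered$"
| makemv delim="," hosts
| eval hosts=mvmap(hosts, lower(trim(hosts)))
| eval hosts=mvmap(hosts, mvindex(split(hosts, "."), 0) . "*")
| table hosts

From there the cleaned values could be joined back into a single string (for example with mvjoin(hosts, ",")) and returned by a subsearch into the host IN ( ... ) clause of the main search.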
08-21-2024
09:30 AM
Buongiorno Giuseppe, I see what you are saying, but I don't think that will work. Here is what is in an event.

{"timestamp": "2024-08-20 15:30:00.837000", "data_type": "finding_export", "domain_id": "my_domain_id", "domain_name": "my_domain_name", "path_id": "T0MarkSensitive", "path_title": "My Path Title", "user": "my_user"}

Every 15 minutes the binary goes to the API and pulls events. Most of the events are duplicates except for the timestamp. There may or may not be a new event, which needs to be alerted on. The monitoring team doesn't want to see any duplication, thus the lookup to save what has already come through. The issue is that not all the fields have values all the time, and when a field has no value the SHA256 command doesn't work. Which is why I asked whether there is a better way than doing isnull on each field. Ciao, Joe
08-20-2024
02:39 PM
I've got this search.

index=my_index data_type=my_sourcetype earliest=-15m latest=now
| eval domain_id=if(isnull(domain_id), "NULL_domain_id", domain_id)
| eval domain_name=if(isnull(domain_name), "NULL_domain_name", domain_name)
| eval group=if(isnull(group), "NULL_Group", group)
| eval non_tier_zero_principal=if(isnull(non_tier_zero_principal), "NULL_non_tier_zero_principal", non_tier_zero_principal)
| eval path_id=if(isnull(path_id), "NULL_path_id", path_id)
| eval path_title=if(isnull(path_title), "NULL_path_title", path_title)
| eval principal=if(isnull(principal), "NULL_principal", principal)
| eval tier_zero_principal=if(isnull(tier_zero_principal), "NULL_tier_zero_principal", tier_zero_principal)
| eval user=if(isnull(user), "NULL_user", user)
| eval key=sha512(domain_id.domain_name.group.non_tier_zero_principal.path_id.path_title.principal.tier_zero_principal.tier_zero_principal.user)
| table domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user, key

Because we get repeating events where the only difference is the timestamp, I'm trying to put together a lookup that contains the sha512 key, which will allow an already-seen event to be skipped. What I found is that I can't have a blank value in the sha512 command. Does anyone have a better way of doing this than what I have? TIA, Joe
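One way to avoid a separate isnull eval per field, offered here as a sketch rather than something from the thread, is to let foreach apply the same coalesce to every field before hashing. The field list matches the search above; the NULL_ prefixes are kept so the resulting keys stay comparable.

index=my_index data_type=my_sourcetype earliest=-15m latest=now
| foreach domain_id domain_name group non_tier_zero_principal path_id path_title principal tier_zero_principal user
    [ eval <<FIELD>> = coalesce('<<FIELD>>', "NULL_<<FIELD>>") ]
| eval key=sha512(domain_id.domain_name.group.non_tier_zero_principal.path_id.path_title.principal.tier_zero_principal.user)
| table domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user, key

The dedup itself can then be handled by looking key up against a lookup of previously seen keys, keeping only events where the lookup returns nothing, and appending the new keys back with outputlookup append=true.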
07-31-2024
01:50 PM
@PickleRick That was the issue. I was only pushing to the UF and not the indexers. Sometimes I forget that props.conf has parts that go to the indexers and parts that go to the search heads.
07-31-2024
10:11 AM
@yuanliu So this section of the props.conf spec takes precedence over the LINE_BREAKER?

MAX_EVENTS = <integer>
* The maximum number of input lines to add to any event.
* Splunk software breaks after it reads the specified number of lines.
* Default: 256
07-31-2024
09:33 AM
I'm working with a 9.1.2 UF on Linux. This is the props.conf stanza.

[stanza]
#
# Input-time operation on Forwarders
#
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TRUNCATE = 999
DATETIME_CONFIG = CURRENT

This is the contents of the file.

Splunk Reporting Hosts as of 07/31/2024 12:05:01 UTC
host
hostname1
hostname2
hostname3
hostname4
...
hostname1081

There are 1,083 lines in the file. I used od -cx to verify there is a \n at the end of each line. For some reason, the last entry from a search consists of the first 257 lines of the file, and then the remaining lines are individual entries. I didn't have DATETIME_CONFIG in the stanza, so I thought that might be the issue; it is there now, and it is still an issue. I'm out of ideas. Has anyone seen this before, or have an idea on how to resolve it? TIA, Joe
Tags: line breaking
Labels: field extraction
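A quick check that is not in the thread but can confirm the merge behaviour described in the question: linecount is a default field, so grouping on it shows whether one event really absorbed roughly 256 lines while the rest broke correctly. Index and sourcetype here are placeholders.

index=my_index sourcetype=my_sourcetype
| stats count by linecount
| sort - linecount

A single event with a linecount around 257 lines up with the MAX_EVENTS default of 256 quoted elsewhere in this thread, which points at the line-breaking settings not being applied where parsing actually happens (the indexers), as the follow-up above concluded.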
05-09-2024
09:23 AM
I have a dashboard that I use when checking if a server is compliant. It looks normal in the dashboard, but when I export it as a PDF the last column gets moved to a new page. I found this in ./etc/system/bin/pdfgen_endpoint.py.

DEFAULT_PAPER_ORIENTATION = 'portrait'

What I can't find is a way of overriding the default to change it to landscape. Does such a file exist? If not, beyond changing the code, any ideas on how to get a landscape report so the final column will be on the same page? TIA, Joe
Labels: Classic dashboard
04-30-2024
10:03 AM
Thanks. Since transforms.conf doesn't have the limitations of EXTRACT, I finally got it working.
04-29-2024
10:47 AM
I'm working with a field named Match_Details.match.properties.user. It contains domain\user information that I'm trying to split into domain and user. I can't use EXTRACT in props.conf because of this restriction.

EXTRACT-<class> = [<regex>|<regex> in <src_field>]
NOTE: <src_field> has the following restrictions:
* It can only contain alphanumeric characters and underscore
  (a-z, A-Z, 0-9, and _).

Is this also true for REPORT in transforms.conf? I can't find any documentation that tells me. TIA, Joe
Labels: field extraction
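The follow-up earlier in this feed confirms that transforms.conf handled this; purely as an illustrative search-time alternative (not what the thread used), a rex can also split a domain\user value held in a dotted field. The capture-group names are made up, and the quadruple backslashes are the usual escaping needed for a literal backslash in rex.

| rex field=Match_Details.match.properties.user "(?<acct_domain>.*)\\\\(?<acct_user>.*)"
| table Match_Details.match.properties.user, acct_domain, acct_user

If the dotted field name gives rex any trouble, renaming it first (for example | rename Match_Details.match.properties.user AS mdp_user) sidesteps that.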
04-02-2024
01:45 PM
Thanks. I hadn't thought of that. Since I posted the question, NetSkope came back with a solution. I was sent this:

conf_file_stanzas = conf_file_object.get_all()

Replace the above line with the following:

conf_file_stanzas = conf_file_object.get_all(only_current_app=True)

With that the issue was resolved. The code was trying to get information from another TA.
03-11-2024
03:25 PM
I'm getting this error message in the log file: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=. According to this page, https://splunk.github.io/addonfactory-solutions-library-python/credentials/#solnlib.credentials.CredentialNotExistException , this is due to the username not being valid. I'm trying to work out how to see what is passed to credentials.py, since the information in the username doesn't make sense to me. Is there any way of debugging credentials.py? I tried to put print statements in, but the TA UI didn't like it, and I had to remove the print statements to get the UI working again. I've tried debugging via the command line but always get stuck at this point: session_key = sys.stdin.readline().strip(). I can't work out what I need to do to see where the user information is coming from. Any help on how I can debug this? TIA, Joe
Tags: debug
Labels: configuration, dashboard, troubleshooting
02-28-2024
12:01 PM
Hi All,
I'm trying to debug netskope_email_notification.py from the TA-NetSkopeAppForSplunk by running this command.
splunk cmd python -m pdb netskope_email_notification.py
It runs until it hits this line
session_key = sys.stdin.readline().strip()
How do I get past this? Maybe something like this, but with a session key.
splunk cmd python -m pdb netskope_email_notification.py < session_key
If so, how do you create an external session key?
TIA,
Joe
Tags: command line, python
Labels: scripted input
02-27-2024
11:19 AM
@PickleRick You are correct. It is poorly written. I have already made three suggestions to them, one of which is to split it into an ingest piece and a search piece.