All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am trying to get the common data from two indexes, based on two common fields.

IDS logs:

src        target     cve            service
10.0.0.1   20.2.2.2   CVE-2020-0123  80

VA logs:

dst        cve            service
20.2.2.2   CVE-2020-0123  http

My search:

index=fw sourcetype="ids" cve="*"
    [search index=va sourcetype="vascanner" | rename dst as target | fields cve target]
| table src cve target service
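As a point of comparison, a subsearch like the one above only returns results where both fields match, but it is subject to subsearch result limits. An alternative sketch that searches both indexes at once and keeps only the cve/target pairs seen in both (field names src, target, dst, cve and service are taken from the example above) would be:

```
(index=fw sourcetype="ids" cve="*") OR (index=va sourcetype="vascanner")
| eval target=coalesce(target, dst)
| stats values(src) as src values(service) as service dc(index) as index_count by cve target
| where index_count=2
```

The dc(index)=2 test is what enforces "common to both indexes"; values(service) will collect the service as reported by each source (e.g. 80 and http).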
Hi everyone, I'm trying to send pihole.log to my syslog-ng server through a Splunk universal forwarder. I configured the following files:

inputs.conf

[monitor:///var/log/pihole.log]
disabled = false
sourcetype = pihole:log

outputs.conf

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 10.20.30.15:514

[tcpout-server://10.20.30.15:514]

props.conf

[dnsmasq]
NO_BINARY_CHECK = true
DATETIME_CONFIG =
TIME_FORMAT = %b %d %H:%M:%S

The issue is that the log file on the syslog side looks like this:

Dec 22 12:58:04 10.20.30.5 @
Dec 22 12:58:04 10.20.30.5
Dec 22 12:58:04 10.20.30.5 __s2s_capabilities
Dec 22 12:58:04 10.20.30.5 ack=0;compression=0
Dec 22 12:58:04 10.20.30.5 _raw
Dec 22 12:58:24 10.20.30.5 --splunk-cooked-mode-v3--
Dec 22 12:58:24 10.20.30.5 pihole
Dec 22 12:58:24 10.20.30.5 8089
Dec 22 12:58:24 10.20.30.5 @
Dec 22 12:58:24 10.20.30.5
Dec 22 12:58:24 10.20.30.5 __s2s_capabilities
Dec 22 12:58:24 10.20.30.5 ack=0;compression=0
Dec 22 12:58:24 10.20.30.5 _raw

which is not really much. Do you have a hint for me to solve this issue? I'd be very happy.
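The --splunk-cooked-mode-v3-- and __s2s_capabilities markers in the received file indicate the forwarder is sending Splunk's cooked S2S protocol rather than plain text. A sketch of a likely fix, reusing the output group from the post, is to disable cooked data for that group in outputs.conf:

```
# outputs.conf on the universal forwarder -- sketch:
# send raw text instead of Splunk's cooked S2S protocol
[tcpout:default-autolb-group]
server = 10.20.30.15:514
sendCookedData = false
```

If proper syslog framing (priority, header) is needed on the syslog-ng side, a [syslog] output stanza in outputs.conf is another option worth evaluating.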
Hello, I'm hoping someone is able to help me find out what's going on with Splunk Stream and Netflow, because I'm tearing my hair out trying to get it working. I have a separate indexer and search head and am trying to use the independent stream forwarder. The forwarder host also has a UF installed, but not Splunk_TA_stream; incidentally, I tried getting it working with the Splunk_TA_stream app and saw similar results.

SH configuration: Splunk App for Stream installed and configured as per https://docs.splunk.com/Documentation/StreamApp/7.3.0/DeployStreamApp/UseStreamtoingestNetflowandIPFIXdata#Configure_search_heads

Indexer configuration: $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf

[http]
disabled = 0
port = 8088
dedicatedIoThreads = 8

[http://streamfwd]
description = Splunk Stream HEC
disabled = 0
index = main
token = <hec_token>
indexes = _internal,main

[splunk@<indexer> ~]$ netstat -antup | grep 8088
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:8088 0.0.0.0:* LISTEN 11580/splunkd

Independent forwarder setup: /opt/streamfwd/local/inputs.conf

[streamfwd://streamfwd]
splunk_stream_app_location = https://<search_head>:8000/en-us/custom/splunk_app_stream/
stream_forwarder_id =
disabled = 0

/opt/streamfwd/local/streamfwd.conf

[streamfwd://streamfwd]
authToken = <auth_token_generated_by_curl_config>

[streamfwd]
httpEventCollectorToken = <HEC_TOKEN>
processingThreads = 4
indexer.0.uri = https://<indexer>:8088
netflowReceiver.0.port = 9996
netflowReceiver.0.decoder = netflow
netflowReceiver.0.ip = <forwarder_ip>

If I run the search index=main sourcetype="stream:*", the only events I see are:

{
  endtime: 2020-12-22T12:18:36Z
  event_name: netFlowOptions
  exporter_ip: <router_ip>
  exporter_time: 2020-Dec-22 12:18:36
  exporter_uptime: 4273621448
  netflow_version: 9
  observation_domain_id: 0
  seqnumber: 340894
  timestamp: 2020-12-22T12:18:36Z
}

and running index=_internal sourcetype="stream:*" host="<forwarder>" gives me two sourcetypes, stream:log and stream:stats. stream:log gives me nothing of interest, just decode errors until the template is received, then those errors stop.
stream:stats shows me:

{
  agentMode: 1
  ipAddress: <stream_forwarder_ip>
  netflow: {
    NetflowDataHandlers: [
      {
        NetflowDecoders: [ { name: Netflow, processedRecords: 210991 } ]
        droppedPackets: 0
        id: 0
      }
    ]
    NetflowReceivers: [ { id: 0, recvdBytes: 8861500, running: true } ]
    eventsIn: 210964
    eventsOut: 210964
    id: NetflowManager
    running: true
  }
  osName: Linux
  senders: [
    {
      busyConnections: 0
      configTemplateName:
      connections: [
        { endpoint: 0.0.0.0:0, id: 0, lastConnect: 2020-12-22T12:15:55.118285Z, numErrors: 5, numSent: 20, queueSize: 0, status: closed, workStatus: idle }
        { endpoint: 0.0.0.0:0, id: 1, lastConnect: 2020-12-22T12:14:54.193007Z, numErrors: 4, numSent: 27, queueSize: 0, status: closed, workStatus: idle }
        { endpoint: 0.0.0.0:0, id: 2, lastConnect: 2020-12-22T12:14:54.200473Z, numErrors: 3, numSent: 20, queueSize: 0, status: closed, workStatus: idle }
        (seven more connections, collapsed)
      ]
      dateLastUpdated: 1608637900306
      encrypted: true
      host: <search_head>
      id: <some_id>
      key:
      lastErrorCode: 0
      name:
      numBytes: 4367915
      numErrors: 41
      numStreams: 1
      openConnections: 0
      port: 8000
      requestsQueued: 0
      requestsSent: 229
      running: true
      streamForwarderGroups: [ ]
      streamForwarderId: <forwarder_fqdn>
      streams: [
        { bytes: 8016506, bytes_in: 8016506, bytes_out: 0, delta_bytes: 339112, delta_bytes_in: 339112, delta_bytes_out: 0, delta_events: 8924, delta_raw_bytes: 5889905, events: 210964, id: TEST_NETFLOW, raw_bytes: 130470120, stats_only: 0 }
      ]
    }
  ]
  sniffer: { (collapsed) }
  systemType: x86_64
  versionNumber: 7.3.0
}

which suggests that the netflow receivers are working as expected. Running a tcpdump on the receiver host, I can see that I am receiving genuine NetFlow v9, which is readable in Wireshark. I've looked at splunkd.log on the indexer and I'm not seeing anything that relates to the stream forwarder. I'm at a loss where to look next.
I have gone through the documentation countless times over the last few days to make sure I'm not missing anything. Any help would be greatly appreciated! Thanks
WARN [Indexer] Configuration initialization for C:\Program Files\Splunk\var\run\searchpeers\Seachheadbundle took longer than expected (1359ms) when dispatching a search with search ID remote_searchhead_user__usert_bundle. This usually indicates problems with underlying storage performance.

Hi all, we have 6 indexers (Windows, RAID storage), 1 CM (Windows), 1 DS (Windows) and 2 search heads (Windows). We get the above warning every time we run any search, and it comes from every indexer. Please help us resolve it. @kamlesh_vaghela @splunk @Anonymous @gcusello @to4kawa @renjith_nair @ITWhisperer
Hi, below is my Splunk search query and a screenshot. I want to eliminate TrustedLocation = "Zscaler Miami III" from my results. I tried but was unable to achieve it; please help me with the query.

index=test "vendorInformation.provider"=IPC
| eval Event_Date=mvindex('eventDateTime',0)
| eval UPN=mvindex('userStates{}.userPrincipalName',0)
| eval Logon_Location=mvindex('userStates{}.logonLocation',0)
| eval Event_Title=mvindex('title',0)
| eval Event_Severity=mvindex('severity',0)
| eval AAD_Acct=mvindex('userStates{}.aadUserId',0)
| eval LogonIP=mvindex('userStates{}.logonIp',0)
| eval Investigate="https://portal.azure.com/#blade/Microsoft_AAD_IAM/RiskyUsersBlade/userId/".AAD_Acct
| stats count by Event_Date, Event_Title, Event_Severity UPN Logon_Location LogonIP Investigate
| lookup WeirMFAStatusLookup.csv userPrincipalName as UPN
| lookup Lookup_EMPADInfo.csv userPrincipalName as UPN
| lookup WeirSiteCode2IP.csv public_ip as LogonIP
| lookup ZscalerIP CIDR_IP as LogonIP
| lookup WeirTrustedIPs.csv TrustedIP as LogonIP
| fillnull value="Unknown Site" site_code
| eval AD_Location=st + ", " + c
| fillnull value="OK" MFAStatus
| eval TrustedLocation=if(isnull(TrustedLocation), ZLocation, TrustedLocation)
| rename site_code as LogonSiteCode
| table Event_Date, Event_Title, Event_Severity UPN LogonIP LogonSiteCode Logon_Location AD_Location TrustedLocation MFAStatus count Investigate
| sort - Event_Date

@soutamo @ITWhisperer @gcusello @thambisetty @richgalloway @to4kawa
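One way to drop those rows, assuming TrustedLocation holds exactly the value shown once the coalescing eval has run, is a where clause placed after that eval and before the final table command:

```
| eval TrustedLocation=if(isnull(TrustedLocation), ZLocation, TrustedLocation)
| where TrustedLocation!="Zscaler Miami III"
```

Note that != also discards rows where TrustedLocation is null; if those rows should be kept, use `| where TrustedLocation!="Zscaler Miami III" OR isnull(TrustedLocation)` instead.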
Hi, I have installed the Akamai SIEM app on a heavy forwarder and did some initial testing. Aside from not yet having proper authentication on the Akamai side, the app was working and sending data to my indexers. After they changed something at our user level and asked us to retry, I keep getting the following error messages and I can't find their root cause:

12-22-2020 12:30:28.303 +0100 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Message : HTTP 401 -- call not properly authenticated, Exception : com.splunk.HttpException: HTTP 401 -- call not properly authenticated
12-22-2020 12:30:28.303 +0100 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" at com.splunk.HttpException.create(HttpException.java:84)
12-22-2020 12:30:28.303 +0100 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" at com.splunk.HttpService.send(HttpService.java:500)
12-22-2020 12:30:28.303 +0100 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" at com.splunk.Service.send(Service.java:1295)
12-22-2020 12:30:28.303 +0100 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" at com.akamai.siem.Main.getValuesFromKVStore(Main.java:802)
12-22-2020 12:30:28.303 +0100 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" at com.akamai.siem.Main.streamEvents(Main.java:455)
12-22-2020 12:30:28.303 +0100 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" at com.splunk.modularinput.Script.run(Script.java:74)
12-22-2020 12:30:28.303 +0100 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" at com.splunk.modularinput.Script.run(Script.java:48)
12-22-2020 12:30:28.303 +0100 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" at com.akamai.siem.Main.main(Main.java:116)
12-22-2020 12:30:28.303 +0100 INFO ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, end streamEvents
12-22-2020 12:30:28.303 +0100 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1

I'm running openjdk version "1.8.0_265", which initially worked fine, and I'm using the latest version of the Akamai SIEM app, 1.4.8. The Splunk version is 7.3.4, which should be fine. Does anybody have some clues? Regards
I am trying to export/import a dashboard from one controller to another through a curl command, but I am getting an "invalid JSON format" error. I exported the dashboard from one version (Dashboard Version 4.0) and I am importing it into another version (Dashboard Version 3.0); the invalid JSON format error appears when I try to import it.
I am trying to search for MISP events by their name, which is present in the 'info' field. For this purpose I'm using the 'other' parameter and passing the following JSON: {"info":"text to search for"}. The query does not give any error, but the results are not related to the text I specified: I just receive the first 10 events present in MISP, even if I specify the whole title in the query rather than only a keyword. Am I doing something wrong? I've also tried the approach using a format block and double braces, as mentioned here: Solved: Phantom MISP "Run Query" action - Splunk Community, but it made no difference. Is there any way to search for events by keywords in the 'info' field?
Hello, we are running DB Connect v3.1.4 on a Linux machine, and for the first time we are trying to connect to a SQL Server database via Windows authentication. We had always used local database accounts before; now the database admin has granted access to one of our service accounts registered in Active Directory. Our user is "domain\username". We tried the driver "MS-SQL Server Using MS Generic Driver With Windows Authentication", but when testing the new connection we get the error "This driver is not configured for integrated authentication. ClientConnectionId: etc...". We checked the box in the identity creation to specify the domain, and Splunk DB Connect does display the driver "MS-SQL Server Using MS Generic Driver With Windows Authentication" with a green check mark in the list of installed drivers. Do we need to install "MS-SQL Server Using jTDS Driver With Windows Authentication" instead? Or is there another issue? Thank you!
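For context: the generic Microsoft driver's "integrated authentication" relies on a native Windows library, which is not available on a Linux host, whereas the jTDS driver can do NTLM-style Windows authentication from Linux by passing the domain as a connection property. A hypothetical jTDS JDBC URL (host, port, database and domain are placeholders, not values from the post) would look like:

```
jdbc:jtds:sqlserver://<db_host>:1433/<database>;domain=<AD_domain>;useNTLMv2=true
```

So switching to "MS-SQL Server Using jTDS Driver With Windows Authentication" is likely the right direction on a Linux DB Connect host, assuming the SQL Server instance accepts NTLM logins for that account.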
I have an environment where I'm using a data model with the _internal index. The datamodel_summary is created in the path where my _internaldb is. Here is my index definition:

[_internal]
homePath = /opt/splunk/var/lib/splunk/_internaldb/db
coldPath = /opt/splunk/var/lib/splunk/_internaldb/colddb
thawedPath = /opt/splunk/var/lib/splunk/_internaldb/thaweddb
tstatsHomePath = /opt/splunk/var/lib/splunk/_internaldb/datamodel_summary

The documentation suggests that the tstatsHomePath setting cannot be used with SmartStore. How do I define a data model with SmartStore otherwise?
In our production environment, the following message is displayed and the license cannot be applied until its renewal date:

"failed to add because: license is from the future; its active span has not begun yet"

I found a similar question in Splunk Answers: https://community.splunk.com/t5/Archive/failed-to-add-because-license-is-from-the-future-its-active-span/m-p/44122 But as far as I can verify, that doesn't seem to be the case here. Has anyone had the same thing happen?
Greetings!

How do I restart UDP port 514, which is configured on public IP x.x.x.x? All the syslog senders are configured to send data to x.x.x.x:514, where this public IP is the Splunk log collector. Now I can't receive logs into the Splunk log collector. If I test the public IP by pinging, it replies fine, but the port is not listening: when I test with telnet, it does not connect. How can I solve this and put the service back into a listening state? Thank you in advance!
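One caveat first: telnet speaks TCP, so it can never confirm a UDP listener; a failed telnet to port 514 does not by itself prove the UDP input is down. A sketch of what the input stanza and an on-host check might look like (the sourcetype is an assumption, adjust to your setup):

```
# inputs.conf on the collector -- sketch of a UDP syslog input
[udp://514]
sourcetype = syslog
disabled = 0

# verify the listener on the collector itself (UDP, not TCP):
#   ss -ulnp | grep 514
# then restart Splunk to re-open the port:
#   $SPLUNK_HOME/bin/splunk restart
```

If ss shows nothing after a restart, check splunkd.log for bind errors; binding to a port below 1024 requires Splunk to run as root or have the appropriate capability.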
While configuring the AAD User log, I am getting this error. Is there anyone who can help me regarding this?

2020-12-22 08:42:59,793 ERROR pid=13688 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\modinput_wrapper\base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\MS_AAD_user.py", line 76, in collect_events
    input_module.collect_events(self, ew)
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\input_module_MS_AAD_user.py", line 36, in collect_events
    users_response = azutils.get_items_batch(helper, access_token, url)
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_azure_utils\utils.py", line 55, in get_items_batch
    raise e
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_azure_utils\utils.py", line 49, in get_items_batch
    r.raise_for_status()
  File "C:\Program Files\Splunk\etc\apps\TA-MS-AAD\bin\ta_ms_aad\aob_py3\requests\models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://graph.microsoft.com/beta/users/

Thanks in advance.
Hello, the issue I am having is with the following command:

./splunk restart

When I try to restart I get the following message:

As su user: Failed to run splunk as SPLUNK_OS_USER. This command can only be run by bootstart user.
Without su user: please run 'splunk ftr' as boot-start user

I don't understand what it is asking me to do. I want to mention that Puppet is the tool we use to deploy the UF to our Linux servers. I am trying to restart the UF because I want to see if the Linux server will use the Splunk server as a deployment server after adding a deploymentclient.conf file to SplunkUniversalForwarder/etc/system/local.
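Those messages mean the restart has to be issued as the OS account that boot-start was configured with, not as root or via a partial su. A sketch, assuming that account is named splunk and the UF lives in /opt/splunkforwarder (both assumptions, adjust to your deployment):

```
# run the restart as the boot-start user rather than root
sudo -u splunk /opt/splunkforwarder/bin/splunk restart
```

If boot-start was enabled with systemd management, restarting the service unit that `splunk enable boot-start` created is an equivalent route.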
Hi Team, I have a Splunk search which results in the table below.

        Col1  Col2  Col3  Col4
Row1    X     X     X     X
Row2    X     X     X     X
Row3    X     X     X     X

The Col* columns are dynamic, based on the time range; here it is set to 4 months. Each column holds values from 0-99.

        Jan20  Feb20  Mar20  Apr20
Row1    0      8      3      4
Row2    9      9      7      5
Row3    8      1      7      1

I want to compare Col2 with Col1 and, depending on whether the Col2 value is greater than, less than, or equal to the Col1 value, create a new column with values like Increased, Decrease, or Nothing changed. Expected result:

        Jan20  Feb20  Comp_of_Feb_Minus_Jan  Mar20  Apr20  Comp_of_Apr_Minus_Mar
Row1    0      8      Increased              3      4      Increased
Row2    9      9      Nothing changed        7      5      Decrease
Row3    8      1      Decrease               7      1      Decrease
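For a fixed pair of columns, the comparison can be sketched with case(); the column names Jan20/Feb20/Mar20/Apr20 are taken from the example above, and for dynamically named month columns the eval statements would have to be generated (e.g. via a macro), since eval cannot reference field names computed at runtime:

```
| eval Comp_of_Feb_Minus_Jan=case(Feb20>Jan20, "Increased", Feb20<Jan20, "Decrease", true(), "Nothing changed")
| eval Comp_of_Apr_Minus_Mar=case(Apr20>Mar20, "Increased", Apr20<Mar20, "Decrease", true(), "Nothing changed")
| table Jan20 Feb20 Comp_of_Feb_Minus_Jan Mar20 Apr20 Comp_of_Apr_Minus_Mar
```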
Hi all, we have installed the Splunk Add-on for Microsoft SCOM and enabled the default "Performance" template. SCOM performance data is being collected in Splunk, but it includes every SCOM-enabled performance rule. We need only Disk, Memory and Processor. How do we restrict the Splunk SCOM add-on to filter and send only these three parameters? (We don't want performance data for other parameters, such as SQL or IIS, sent to Splunk.)
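If the add-on's template itself cannot be narrowed, a common Splunk pattern is to drop unwanted events at parse time with a null-queue route. This is only a sketch: the sourcetype stanza name and the counter-object names in the REGEX are assumptions that would need to be matched against the actual SCOM events:

```
# props.conf (stanza name is an assumption -- use your SCOM perf sourcetype)
[scom:performance]
TRANSFORMS-scomfilter = scom_drop_all, scom_keep_disk_mem_cpu

# transforms.conf
[scom_drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[scom_keep_disk_mem_cpu]
REGEX = (LogicalDisk|Memory|Processor)
DEST_KEY = queue
FORMAT = indexQueue
```

Transforms run in order, so everything is first routed to the null queue and only events matching the disk/memory/processor counters are routed back to the index queue. This must live on the first full Splunk instance that parses the data (indexer or heavy forwarder).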
Hello, I am hoping this is easy and I am just blanking. I have a data source that logs which work order is in station one. I am looking to get an offline_time based on when a work order reaches a certain station. Example:

_time    WO     count
11:45    1231   1
11:40    1232   2
11:35    1233   3
11:30    1234   4
... etc.

_time is when the work order starts, and when count reaches a certain number the work order is done on the line (count would equal 35 in my case). I would like to collect the _time of the count=1 row at the moment the count reaches 35. Thanks.
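If I've read the layout right (events sorted newest first, with the oldest work order on the line sitting at count=35), one sketch is a sliding 35-event window, where the newest event's time marks when the oldest work order in the window went offline; field names WO and count are taken from the example, and this is a guess at the intent rather than a definitive answer:

```
| sort 0 - _time
| streamstats current=t window=35 earliest(WO) as finished_wo latest(_time) as offline_time count as span
| where span=35
| table finished_wo offline_time
```

Here earliest(WO) picks the oldest work order in each 35-event window and latest(_time) is the time of the newest (count=1) event, i.e. the proposed offline_time.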
Hi, are there apps to help with field extraction for sourcetype=linux_syslog? I have hosts (Solaris, RHEL, etc.) sending logs over UDP on discrete ports, and the fields extracted by default are really limited. Yes, I know it is not recommended to send syslog directly to Splunk, but this will have to do until we can purchase hardware and set up a syslog server. I am also not able to install a UF on these hosts. Any help is much appreciated!
Hi there! I have a custom query that produces an output similar to this:

| makeresults
| eval data="Name=ServerA IP=1.1.1.1 OS=\"Windows 2016\" Software=Word;Name=ServerA IP=1.1.1.1 OS=\"Windows 2016\" Software=Paint;Name=ServerA IP=1.1.1.1 OS=\"Windows 2016\" Software=VMWare Tools;Name=ServerB IP=1.1.1.2 OS=\"Windows 2016\" Software=Word;Name=ServerB IP=1.1.1.2 OS=\"Windows 2016\" Software=Paint;Name=ServerB IP=1.1.1.2 OS=\"Windows 2016\" Software=VMWare Tools;"
| makemv data delim=";"
| mvexpand data
| rename data as _raw
| kv
| table Name IP OS Software

My goal is to remove some of the redundant data in the output, so that each software still has its own row but the repeated host details are shown only once. The reason I'm looking into this is that I want the CSV export to have exactly the same format. I have tried adding this to the query:

| stats values(Software) as Software by Name, IP, OS

which puts me closer to what I want, although when I export the data to CSV all the software shows up in one cell. That is fine when you have 2 or 3, but it is definitely a no-go when you have 100+ software per asset. Any ideas? TIA!
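One sketch that keeps each software on its own row while blanking the repeated host columns (so a CSV export shows Name/IP/OS only once per host) uses streamstats plus foreach; it assumes the rows are already grouped by host, as in the example output:

```
| streamstats count as row_in_group by Name IP OS
| foreach Name IP OS
    [ eval <<FIELD>>=if(row_in_group>1, "", '<<FIELD>>') ]
| fields - row_in_group
```

The foreach template rewrites each of the three host fields to an empty string on every row after the first within its group, leaving the Software column untouched.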
I am running Splunk 7.3.2, preparing to upgrade to Splunk 8.1.1, so I know I need to upgrade my forwarders before moving to Splunk 8 for compatibility. When I go to download the universal forwarder for my 2012 servers, it only shows 7.3.8 as compatible with 2012. Does that mean Splunk version 8 does not fully support Windows Server 2012? Thanks for any assistance.